Difference between ZFS checksum algorithms

Hello all.
Where can I find information about the differences between the fletcher2, fletcher4, and sha256 algorithms for ZFS checksums?
Thank you.

You should learn to use Google. The first hit on "zfs checksum algorithms" gave me:
http://www.opensolaris.org/os/community/zfs/source/
In short: at the moment only one algorithm can be used. It wouldn't surprise me if the others haven't been fully implemented yet; if so, there is not much difference, since you only have one option right now.
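Conceptually, the contrast is simple: the fletcher algorithms are cheap running-sum checksums, while sha256 is a full cryptographic hash (slower, but it also detects deliberate tampering). A rough Python sketch of the idea (the four-accumulator loop below follows the fletcher4 scheme in spirit only; it is not the bit-exact on-disk ZFS code):

import hashlib
import struct

def fletcher4_style(data: bytes):
    # four cascaded running sums over 32-bit little-endian words,
    # reduced modulo 2**64; cheap, but not cryptographically strong
    a = b = c = d = 0
    data = data + b"\x00" * (-len(data) % 4)   # pad to whole words
    for (word,) in struct.iter_unpack("<I", data):
        a = (a + word) % 2**64
        b = (b + a) % 2**64
        c = (c + b) % 2**64
        d = (d + c) % 2**64
    return a, b, c, d

block = b"some file data" * 1000
print(fletcher4_style(block))                # fast arithmetic checksum
print(hashlib.sha256(block).hexdigest())     # cryptographic hash, slower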

Similar Messages

  • Difference between Zfs volume and filesystem ?

    Hi,
Does anyone know the difference between a ZFS volume and a ZFS filesystem?
On one of the existing nodes I saw the following entries for two datasets:
root@node11> zfs get all rpool/dump
NAME        PROPERTY       VALUE                  SOURCE
rpool/dump  type           volume                 -
rpool/dump  creation       Thu Feb 18 13:55 2010  -
rpool/dump  used           1.00G                  -
rpool/dump  available      261G                   -
rpool/dump  referenced     1.00G                  -
rpool/dump  compressratio  1.00x                  -
rpool/dump  reservation    none                   default
rpool/dump  volsize        1G                     -
rpool/dump  volblocksize   128K                   -
root@node11> zfs get all rpool/ROOT/firstbe/opt/SMAW
NAME                         PROPERTY       VALUE                  SOURCE
rpool/ROOT/firstbe/opt/SMAW  type           filesystem             -
rpool/ROOT/firstbe/opt/SMAW  creation       Thu Feb 18 14:03 2010  -
rpool/ROOT/firstbe/opt/SMAW  used           609M                   -
rpool/ROOT/firstbe/opt/SMAW  available      264G                   -
rpool/ROOT/firstbe/opt/SMAW  referenced     609M                   -
rpool/ROOT/firstbe/opt/SMAW  compressratio  1.00x                  -
rpool/ROOT/firstbe/opt/SMAW  mounted        yes                    -
rpool/ROOT/firstbe/opt/SMAW  quota          none                   default
rpool/ROOT/firstbe/opt/SMAW  reservation    4G                     local
rpool/ROOT/firstbe/opt/SMAW  recordsize     128K                   default
rpool/ROOT/firstbe/opt/SMAW  mountpoint     /opt/SMAW              inherited from rpool/ROOT/firstbe
root@node11> zfs list
NAME                         USED   AVAIL  REFER  MOUNTPOINT
rpool/dump                   1.00G  261G   1.00G  -
rpool/ROOT/firstbe/opt/SMAW  609M   264G   609M   /opt/SMAW
    Regards,
    Nitin K

nitin.k wrote:
Hi,
Does any one know the difference between Zfs volume and Zfs filesystem?
A volume is a block device. A filesystem is a mount point for file access.
    For most users, a volume isn't normally necessary except for 'dump'.
    Darren

  • The difference between SSL & TLS

    dear experts,
I need to know the difference between SSL & TLS and in which situations I have to use each.
    thanks
    Labib Makar

    Labib,
At a 10,000-foot level, SSL v3.0 was superseded by TLS v1.0.
TLS v1.0 (RFC 2246) was an upgrade to SSL v3.0 (but they don't interoperate).
    This "Cisco.com document" describes the workings of both in some detail:  SSL: Foundation for Web Security
    it states this as some basic differences:
TLS uses slightly different cryptographic algorithms for such things as the MAC function and the generation of secret keys. TLS also includes more alert codes.
    Also See: Wikipedia TLS
As far as which to use, it depends on whether both sides (server/client) support each; TLS v1.0 and v1.1 are newer.
    Most modern Browsers tend to support both.
    i.e.
    Firefox 3.5.7 supported both SSL v3.0 and TLS v1.0
    Internet Explorer v6 supported both SSLv2, SSLv3, TLS v1.0
    etc.
    Hope that helps.
    Steve Ochmanski
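If you want to see which protocol a given client/server pair actually negotiates, a quick probe with Python's ssl module can tell you (example.com is just a stand-in host):

import socket
import ssl

# Connect with the default settings and ask which protocol version
# the handshake actually settled on.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. 'TLSv1.2' or 'TLSv1.3'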

  • What is difference between C# Gzip and Java swing GZIPOutputStream?

    Hi All,
    I have a Java swing tool where i can compress file inputs and we have C# tool.
    I am using GZIPOutputStream to compress the stream .
I found a difference between C# and Java Gzip compression while compressing a file (temp.gif):
After compressing temp.gif in C#, the compressed file size increased,
while in Java I got about a 2% compression of the data.
Could you please tell me whether I can achieve the same output in Java as in C# using GZIPOutputStream?
Thanks a lot in advance.

797957 wrote:
Does java provides a better compression than C#?
No idea, I don't do C# programming. And your question is most likely really: "does Java default to a higher compression level than C#?"
Btw what is faster compression vs. better compression?
Meaning: does the code spend more time/effort trying to compress the data (slower but better compression) or less time/effort (faster but worse compression)? Most compression algorithms allow you to control this tradeoff, depending on whether you care more about CPU time or disk/memory space.
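The speed-vs-size tradeoff is easy to see in any gzip binding; a hedged Python sketch (the levels map conceptually onto Java's Deflater levels, not onto any particular C# API), which also shows why already-compressed input such as a GIF can grow:

import gzip
import os

text = b"some highly repetitive payload " * 2000
print(len(gzip.compress(text, compresslevel=1)))   # fast, larger output
print(len(gzip.compress(text, compresslevel=9)))   # slower, smaller output

# A GIF's pixel data is already compressed; random bytes stand in
# for it here, and gzip makes them slightly bigger, not smaller.
random_like = os.urandom(10_000)
print(len(random_like), len(gzip.compress(random_like)))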

  • Better estimation of phase difference between two signals with variable frequency!

    Hello LabView Gurus, 
    Being a power engineer and having just a little knowledge of signal processing and labview, I have been pulling my hair out for the last couple of days to get a better estimation of phase difference between two signals.
    We have two analog voltage signals; 1. sine wave (50Hz ± 1Hz) and 2. a square wave with exactly half of sine wave frequency at any time.
    At the starting point of operation (and simulation/acquisition) both signals will have no phase difference. However, the square wave's frequency changes unpredictably for a just a few millisecond but then it gets synchronized with sine wave's frequency again. This means that the square wave will be phased out from its original position. The task of the labview is to find the phase difference between the two signals continuously.
My approach to determining the phase difference is to measure the time when the sine wave crosses zero amplitude and the time when the very next square wave changes amplitude from zero volts to +ve voltage (I have a 0.5 volt threshold just to avoid any dramas from small line noise). The difference between these times is then divided by the time period and multiplied by 360 to get the phase difference in degrees.
    As this part is just a small block of a big project, I can only allow 5000Hz sampling rate each for both signals. I read 500 samples (which means I read data from 5 cycles of sine wave and 2.5 cycles of square wave).
Now the problem is, as long as the frequency of the sine wave stays constant at exactly 50Hz, I get a good estimation of the phase difference, but when the frequency changes even a little (and it will happen in the real scenario, i.e. 50Hz ± 1Hz, and the square wave's frequency is dependent on the sine wave's frequency), the estimation error increases.
    I have attached my labview program. From front panel, you can set the phase of square wave (between -180 and 0) and you should see the labview's calculated phase in the indicator box named 'Phase'. Then you can press 'Real Frequency' switch that would cause the frequency to change like it would in real operation.
    You can observe that the estimation error increases after you push the button. 
    All I need to do is to reduce this estimation error and make it as close to the actual phase difference as possible. Any help would be greatly appreciated.
    I am using LabView 2009 for this task.
    The application is for electric machines and the stability/performance of machines under different faults.
    Thank you for reading this far!
    Regards,
    Awais
    Attachments:
    v603.png ‏320 KB
    v603.vi ‏186 KB

    Jeff Bohrer wrote:
Basic math gives me a bit of pause on this approach. You are sampling at 50 times the frequency of interest, so you get 50 samples per cycle. Your phase resolution is 1/50th of a cycle, or 7.2 degrees +/- noise. You will need to sample faster to reduce phase resolution, or average multiple readings (at a time cost that is significant).
    Jeff- (Hardly Working)
    I am sampling at 100 times the sine wave's frequency and 200 times the square wave's frequency.  Increasing the sampling rate completely solves my problem. But since I am acquiring several other inputs, I cannot afford a sampling rate higher than 5kHz.
    F. Schubert wrote:
    I'm not a signal processing expert, but here my basic understanding.
If you simulate sampling with 5kHz and a frequency of 50 Hz (and both are 'sync' by design), you always get exactly 5 periods. Any variation of your signal's frequency gives you a probability of getting 4 or 6 'trigger' events. That's an up or down of 20%!
    The one measure to reduce such problems is using 'window functions'. They don't fit your current approach (counting instead of a DSP algorithm), so this needs to be reworked as well.
My approach would be to use the concept of a Lock-In amplifier. You need to phase-shift your ref signal by 90°. Then multiply your measurement signal with the ref signal and with the phase-shifted ref signal. The obtained values form the x/y coordinates of a complex number. Calculate the theta of the complex number (with the LV primitive). Feed this into a low-pass filter.
The trick in this is that the square wave has harmonics in it, and you are interested in the second harmonic, which is the sine wave.
    To get rid of the effect that the sync between sampling rate and ref signal frequency gives an error, you then can use the window I mentioned above (place it before the lock-in).
    For a design that really plays well, use a producer-consumer design pattern to get the calculations done in parallel with the DAQ.
    I suggest you to check on wikipedia for some of the keywords I mentioned. Go also for the external links which lead to great tutorials and AppNotes on the signal processing basics.
    Sorry, it's not a simple solution I offer and we will have quite some conversation on this forum if you follow this path. Maybe someone else knows a simpler way.
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml
An interesting view: the sine wave can indeed be looked at as the second harmonic of the square wave. I will implement your idea and get back to you as soon as I get some results. But since I have very limited knowledge of signal processing, it might take me a while to get my head around the solution you mentioned.
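For later readers, the core of the lock-in idea Felix describes fits in a few lines; a hedged NumPy sketch with a synthetic sine (the windowing and the square-wave harmonic handling are deliberately left out):

import numpy as np

fs, f = 5000.0, 50.0                  # sample rate and sine frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)         # 100 ms = 5 whole cycles
true_phase = np.deg2rad(-37.0)        # the offset we try to recover
signal = np.sin(2 * np.pi * f * t + true_phase)

ref_i = np.sin(2 * np.pi * f * t)     # in-phase reference
ref_q = np.cos(2 * np.pi * f * t)     # 90-degree-shifted reference

# Averaging over whole cycles acts as the low-pass filter.
x = np.mean(signal * ref_i)
y = np.mean(signal * ref_q)
print(np.rad2deg(np.arctan2(y, x)))   # ~ -37.0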

  • Hi, I want to know the difference between the types of internal tables.

I know the types of internal table, but I don't know the difference between them. Can anyone explain it in simple sentences?

    Hi,
    <b>Standard Internal Tables</b>
    Standard tables have a linear index. You can access them using either the index or the key. If you use the key, the response time is in linear relationship to the number of table entries. The key of a standard table is always non-unique, and you may not include any specification for the uniqueness in the table definition.
    This table type is particularly appropriate if you want to address individual table entries using the index. This is the quickest way to access table entries. To fill a standard table, append lines using the (APPEND) statement. You should read, modify and delete lines by referring to the index (INDEX option with the relevant ABAP command). The response time for accessing a standard table is in linear relation to the number of table entries. If you need to use key access, standard tables are appropriate if you can fill and process the table in separate steps. For example, you can fill a standard table by appending records and then sort it. If you then use key access with the binary search option (BINARY), the response time is in logarithmic relation to
    the number of table entries.
    <b>Sorted Internal Tables</b>
    Sorted tables are always saved correctly sorted by key. They also have a linear key, and, like standard tables, you can access them using either the table index or the key. When you use the key, the response time is in logarithmic relationship to the number of table entries, since the system uses a binary search. The key of a sorted table can be either unique, or non-unique, and you must specify either UNIQUE or NON-UNIQUE in the table definition. Standard tables and sorted tables both belong to the generic group index tables.
    This table type is particularly suitable if you want the table to be sorted while you are still adding entries to it. You fill the table using the (INSERT) statement, according to the sort sequence defined in the table key. Table entries that do not fit are recognised before they are inserted. The response time for access using the key is in logarithmic relation to the number of
    table entries, since the system automatically uses a binary search. Sorted tables are appropriate for partially sequential processing in a LOOP, as long as the WHERE condition contains the beginning of the table key.
    <b>Hashed Internal Tables</b>
Hashed tables have no internal linear index. You can only access hashed tables by specifying the key. The response time is constant, regardless of the number of table entries, since the search uses a hash algorithm. The key of a hashed table must be unique, and you must specify UNIQUE in the table definition.
    This table type is particularly suitable if you want mainly to use key access for table entries. You cannot access hashed tables using the index. When you use key access, the response time remains constant, regardless of the number of table entries. As with database tables, the key of a hashed table is always unique. Hashed tables are therefore a useful way of constructing and
    using internal tables that are similar to database tables.
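A rough Python analogy for the three access patterns described above (an illustration, not ABAP): a standard table behaves like a linear scan of a list, a sorted table like a binary search, and a hashed table like a dict lookup.

import bisect

records = [(k, f"value-{k}") for k in range(1_000_000)]

def standard_read(key):                # linear scan: O(n)
    for k, v in records:
        if k == key:
            return v

sorted_keys = [k for k, _ in records]  # already sorted by construction
def sorted_read(key):                  # binary search: O(log n)
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(records) and sorted_keys[i] == key:
        return records[i][1]

hashed = dict(records)
def hashed_read(key):                  # hash lookup: O(1) on average
    return hashed.get(key)

print(standard_read(999_999), sorted_read(999_999), hashed_read(999_999))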

  • Difference between Keep pool vs Recycle pool vs Default pool

Good morning experts;
I need to understand the differences between the Keep pool, the Recycle pool and the Default pool.
How do they differ from each other in behaviour?
    Thanks in advance ..

8f953842-815b-4d8c-833d-f2a3dd51e602 wrote:
Thanks for your answer, MARG.
If I pin an object into the keep pool, does the entire object (all blocks) come into the buffer pool?
But you say that, depending on the query plan,
Oracle will place only portions of objects into the buffer cache at any one time.
Example:
>> To pin a table:
SQL> alter table emp storage (buffer_pool keep);
This table has 1 million records and contains 'n' columns.
Consider that I need output from the name, emp_id and salary columns only,
i.e. who is getting a salary of more than $8000.
Oracle will show the required output. As per your explanation, I can't guess...
Question: how will Oracle place only portions of objects into the buffer instead of the entire object?
Please elaborate a little more.
Oracle uses blocks. The rows are in blocks. When you ask for a column in a row, Oracle has to get the block. When you ask for a couple of columns from many rows, Oracle has to get many blocks. Oracle makes copies of blocks. Oracle has to manage possibly many people accessing the same or different rows in those blocks. Each one needs to have the block appear as it did when the transaction started.
Oracle has many ways to get the blocks. It can look in the SGA; if an appropriate one is not there, it can read it from disk, or it may decide to read many blocks at once from a disk, or it could even decide to just read as much as it can into a user's PGA, perhaps also going to undo in any of those ways to make a read-consistent copy for the user.
So when you look at statistics for a session, you might see sequential gets or scattered gets. The former is often from index access: a single block is gotten from wherever and placed in the SGA. The latter is often from scanning, and the blocks are scattered about, as they aren't necessarily going to be gotten in order. Remember, an Oracle block may be a number of operating system blocks, and a multi[-oracle]-block read may be a lot of data.
So, with all these blocks going into the SGA, it has to decide what stays and what goes. It uses a least-recently-used (LRU) algorithm to eject blocks, and may read blocks into the middle or the end of the list, depending. That's why the default buffer pool works so well: anything continually accessed will, in the grand scheme of things, be kept hot and stay there. When SGAs were much smaller, it was a lot easier to have not-quite-so-hot things get ejected and written out, only to be read in soon after, so the alternate pools would allow those places to be kept, or recycled, as arbitrarily defined.
So think of blocks as the portion of objects in the SGA. There usually are multiple copies of blocks.
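The eviction idea described above can be sketched as a toy LRU cache; a minimal Python model (the real buffer cache is far more nuanced, with touch counts and midpoint insertion, so treat this purely as the conceptual skeleton):

from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()            # block id -> contents

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as recently used
            return self.blocks[block_id]
        data = read_from_disk(block_id)        # cache miss: go to disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data

pool = BufferPool(capacity=3)
for bid in [1, 2, 3, 1, 4]:                    # block 2 becomes the LRU
    pool.get(bid, lambda b: f"block-{b}")
print(list(pool.blocks))                       # [3, 1, 4]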

  • How to find the phase difference between two signals using Hilbert transform

    hi, 
I am new to LabView. I am trying to find the phase difference between two signals. I successfully found the phase difference between two predefined waves using single tone measurement. But I really want to know how I can measure the phase difference between two signals (not predefined, i.e. we don't know the initial conditions) using the Hilbert transform or any other transformation technique (without using zero-cross detection). I tried using the Hilbert transform based on an algorithm, but I am getting an error. Please help me.
    Attachments:
    phase_differece.vi ‏66 KB
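For reference, the general technique the question asks about can be prototyped outside LabView; a hedged NumPy/SciPy sketch with synthetic signals (the attached VI is not reproduced here):

import numpy as np
from scipy.signal import hilbert

fs, f = 5000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
s1 = np.sin(2 * np.pi * f * t)
s2 = np.sin(2 * np.pi * f * t + np.deg2rad(30.0))   # 30 degrees ahead

# The analytic signal's angle is the instantaneous phase; subtracting
# the two phases gives the phase difference without zero crossings.
phase1 = np.unwrap(np.angle(hilbert(s1)))
phase2 = np.unwrap(np.angle(hilbert(s2)))
diff = np.rad2deg(phase2 - phase1)
print(diff[len(diff) // 2])   # ~30, sampled away from the edge artifacts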

    you could try something similar to this, for each table pair that you want to compare:
    SELECT 'TABLE_A has these columns that are not in TABLE_B', DIFF.*
      FROM (
            SELECT  COLUMN_NAME, DATA_TYPE, DATA_LENGTH
              FROM all_tab_columns
             WHERE table_name = 'TABLE_A'
             MINUS
            SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH
              FROM all_tab_columns
             WHERE table_name = 'TABLE_B'
          ) DIFF
    UNION
    SELECT 'TABLE_B has these columns that are not in TABLE_A', DIFF.*
      FROM (
            SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH
              FROM all_tab_columns
             WHERE table_name = 'TABLE_B'
             MINUS
            SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH
              FROM all_tab_columns
             WHERE table_name = 'TABLE_A'
      ) DIFF;
That's assuming column_name, data_type and data_length are all you want to compare on.

  • Difference between fully-specified data types and generic types

    Hi,
    Can anyone tell me the difference between fully-specified data types and generic types.
    Thanks in advance.
    Regards,
    P.S.

Hi,
    Generic table types
    INDEX TABLE
    For creating a generic table type with index access.
    ANY TABLE
    For creating a fully-generic table type.
Data types defined using generic types can currently only be used for field symbols and for interface parameters in procedures. The generic type INDEX TABLE includes standard tables and sorted tables. These are the two table types for which index access is allowed. You cannot pass hashed tables to field symbols or interface parameters defined in this way. The generic type ANY TABLE can represent any table. You can pass tables of all three types to field symbols and interface parameters defined in this way. However, these field symbols and parameters will then only allow operations that are possible for all tables, that is, index operations are not allowed.
    Fully-Specified Table Types
    STANDARD TABLE or TABLE
    For creating standard tables.
    SORTED TABLE
    For creating sorted tables.
    HASHED TABLE
    For creating hashed tables.
    Fully-specified table types determine how the system will access the entries in the table in key operations. It uses a linear search for standard tables, a binary search for sorted tables, and a search using a hash algorithm for hashed tables.
    see this link
    http://help.sap.com/saphelp_nw04/helpdata/en/fc/eb366d358411d1829f0000e829fbfe/content.htm
<b>Reward if useful</b>

  • Difference between Simple, EPC Gen 1 & EPC Gen 2 tags?

    Hi All,
    1. Can some one explain me the difference between Simple, EPC Gen 1 & EPC Gen 2 tags?
    2. When are they used?
    3. Which is the one, widely used?
    Thanks,
    Shridhar..

    Hi Shridhar,
Actually, earlier I only mentioned the main difference between the two, which was the Q Algorithm, and mentioned some links that would help you find some more differences.
From your reply I thought I should make it easier for you, so I have segregated some differences for you from those links.
    <b>Difference between GEN1 and GEN2 tags.</b>
    <b>New features that were incorporated in the Gen 2 protocol.</b>
<b>1. The biggest difference between Gen 1 and Gen 2 is that there is now a single global protocol.</b> The first-generation EPC had two protocols, Class 0 and Class 1, and the same reader could not read both unless it was a multiprotocol reader. The International Organization for Standardization (ISO) also approved two UHF air-interface protocols, 18000-6A and 18000-6B, as international standards, so there have been four UHF standards.
    <b>
2. Another important aspect of the UHF Gen 2 protocol is that it was designed to optimize performance in different regulatory environments around the world.</b>
Europe's communications authorities recently adopted reader regulations that are more relaxed, but the new rules are still quite stringent compared with those in North America. Because the Gen 2 protocol uses the available radio spectrum more efficiently, it will provide better performance in Europe than any other UHF protocol. "Gen 2 creates a good foundation for higher-function products, such as Class 2 and Class 3 tags and readers".
    <b>3. Dense-reader mode</b>
The Gen 2 standard allows readers to operate in three different modes: single-reader mode, multi-reader mode and dense-reader mode. To function optimally, readers will need to operate in dense-reader mode when more than 50 readers are present within a building, such as within a distribution center. Dense-reader mode is designed to prevent readers from interfering with one another, which could be a problem if many readers are used in a confined space, particularly in Europe and other regions where only a small band of the UHF spectrum has been allotted for RFID systems.
    <b>4. Dual methods of backscatter encoding</b>
The Gen 2 protocol also supports another method of encoding the backscatter signal, called FM0. The purpose of allowing the reader to use either FM0 or Miller subcarrier was to improve performance not just when there are many readers in a facility but also when there is a lot of noise in the area.
FM0, a format used effectively in the current ISO standards, is fast but susceptible to interference. Miller subcarrier is slower but works better in noisy environments.
    <b>5. Secure read-write memory</b>
First-generation EPC Class 0 tags are programmed at the factory, when the chips are made. First-generation Class 1 tags are user-programmable, meaning that an end-user company can write EPCs to the tag after taking delivery. In most applications today, Class 1 tags are programmed one by one as they come off a spool.
Gen 2 tags are field-programmable, meaning that readers can write information to tags even if they are attached to cases on a pallet or a conveyor belt. Gen 2 tags will feature three required memory banks—one bank for storing the EPC, one for passwords, one for tag identification (the tag stores information about itself)—and an optional bank for memory that end users can use for whatever purpose they wish (one of the few optional features tags can have). User memory could be used to store codes indicating where products are being shipped to, for instance.
The memory banks can be locked temporarily or permanently. So a product supplier might write an EPC to a tag and lock it permanently. It might then write the identification number of a store that the product is being shipped to in the optional user memory. The supplier might lock that memory with a password to avoid having it overwritten, but a manager in the store's distribution center might have the option of unlocking the memory (if the manufacturer supplies the password), changing the store ID to indicate the destination has changed, and then locking the memory again.
    <b>6. The Q algorithm</b>
One issue with the Gen 1 protocols is that they require RFID readers to use the tags' unique serial numbers to singulate tags (to identify them uniquely). If two tags have the same EPC, they confuse the reader. Some retailers are considering using tags with the same EPC—that is, information similar to what they have on bar codes today—as an interim measure as they move from bar codes to RFID and prepare their software systems to handle unique IDs. Those retailers asked EPCglobal's Hardware Action Group to make it possible to singulate the tags even if two or more tags have the exact same EPC.
    This Algorithm is mentioned in my first reply above.
    <b>7. Sessions</b>
One weakness of the Gen 1 protocols was the possibility that one reader would interfere with another reader's ongoing counting of a group of tags. So let's say a fixed reader is counting all the tagged items on a shelf. It reads a tag and commands it to go to sleep so it can read the next tag. When it is halfway through 100 items, someone comes along with a handheld reader, looking for a specific item on that shelf. The handheld commands all the tags to wake up and respond. Now the fixed reader has to start the counting all over again. To avoid this problem, the Gen 2 protocol introduces something called sessions. Each tag will be able to operate within four separate sessions. A retailer or manufacturer could set up their system so that all fixed readers read tags in session 1, and all handhelds use session 2. So if the fixed reader puts the tags to sleep in session 1, the handheld reader could communicate with the tags in session 2 and not interfere with the ongoing count by the fixed reader in session 1.
    <b>Enhancements made to the Gen 1 protocols.</b>
    <b>
1. Faster read rates:</b>
The Gen 2 protocol is designed to enable readers to read data from and write data to RFID tags much faster than the Gen 1 protocols. Gen 2 supports a tag-to-reader data transfer rate of up to 640 kilobits per second, versus up to 80 kilobits per second for Gen 1 Class 0 and 140 kilobits per second for Gen 1 Class 1.
    <b>2. Fewer ghost reads</b>
One problem early adopters have encountered with the Gen 1 Class 0 protocol is ghost reads. Sometimes the reader thinks it has read a tag with a particular ID when no tag with that ID is present.
    <b>3. Longer passwords</b>.
    Now to talk about your queries.
    >>Q's:1. What if the Tag ID is alpha-numeric?
Let me know from which link you read this so that I can comment on it (if I missed it).
>>Gen 2: These types of tags have the ability to generate random numbers. The reader asks the tag to generate random numbers.
Q's: 1. What is the match/criteria upon which tag/s respond to the reader with the EPC, after which the reader can continue the flow?
Now if you again go through the Q Algorithm mentioned in my reply above, this query is answered.
Well, I will again mention the relevant part of the algorithm:
    " Gen 2 tags have the ability to generate random numbers. The reader will tell the tags the range in which they should generate a random number by issuing a query command with a Q value ranging from 0 to 15. If it often gets back no response to its queries, it will automatically decrease the Q value. If it gets more than one tag responding, it will increase the Q value, thereby increasing the range of numbers that can be generated by the tags. The reader might issue a query with a parameter of Q=4. The tags generate two random numbers, the first one between zero and 65,535, and the second between zero and 2 to the power of Q, minus 1. If Q is four, then 2 to the fourth power is 16, minus 1 equals 15. So all tags choose a second random number between zero and 15. The reader asks any tag that chose zero for its second random number to respond. If one tag has zero, then it responds with the first random number, between zero and 65,535, and the reader acknowledges it. Since the tag has now been singulated, the reader could simply count the tag as present ("I know a tag with a random number of 45,101 is in the field"). It could write an EPC to the tag if it doesn't have one, or it could ask tag 45,101 for its EPC if it does have one. It then asks the remaining tags to subtract one from their second random number and singulates the next tag that has a zero, and it keeps doing that until all the tags are singulated. If no tags choose zero for their second random number, then the reader asks all the tags to decrement their random number by one, and it keeps doing that until a tag with zero responds. If two tags respond, the reader can't read either tag, so it issues a negative acknowledgement, which tells the tags to wait for another query before they respond again. "This protocol makes it extremely unlikely that a reader will singulate two tags when it meant to only talk to one." "
    The above mentioned is the match/criteria upon which Tag/s respond to the reader with the EPC, after which the reader can continue the flow.
I suppose this should clear all your doubts. Let me know if there still are any.
    Thanks,
    Pawan
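The singulation loop quoted above is easy to mock up; a toy Python simulation (heavily simplified: one reader, no Q adaptation, and collisions simply make tags re-draw a slot):

import random

def inventory_round(num_tags, q=4, max_queries=500):
    # each tag draws a slot counter between 0 and 2**q - 1
    slots = {tag: random.randrange(2 ** q) for tag in range(num_tags)}
    queries = 0
    while slots and queries < max_queries:
        queries += 1
        responders = [t for t, s in slots.items() if s == 0]
        if len(responders) == 1:
            del slots[responders[0]]         # singulated and acknowledged
        elif responders:
            for t in responders:             # collision: tags re-draw
                slots[t] = random.randrange(2 ** q)
            continue
        for t in slots:                      # empty slot or a success:
            slots[t] = max(0, slots[t] - 1)  # everyone counts down
    return queries

random.seed(7)
print(inventory_round(num_tags=10))   # queries needed to read all 10 tags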

  • Difference between macro and subroutine

What is the difference between a macro and a subroutine? I need an example of a macro.

    Hi,
    <b>
    Subroutines</b>
    Subroutines are procedures that you can define in any ABAP program and also
    call from any program. Subroutines are normally called internally, that is, they
    contain sections of code or algorithms that are used frequently locally. If you want
    a function to be reusable throughout the system, use a function module.
    <b>Defining Subroutines</b>
    A subroutine is a block of code introduced by FORM and concluded by ENDFORM.
    FORM <subr> [USING ... [VALUE(]<pi>[)] [TYPE <t>|LIKE <f>]... ]
    [CHANGING... [VALUE(]<pi>[)] [TYPE <t>|LIKE <f>]... ].
    ENDFORM.
    <subr> is the name of the subroutine. The optional additions USING and
    CHANGING define the parameter interface. Like any other processing block,
    subroutines cannot be nested. You should therefore place your subroutine
    definitions at the end of the program, especially for executable programs (type 1).
    In this way, you eliminate the risk of accidentally ending an event block in the
    wrong place by inserting a FORM...ENDFORM block.
    <b>Macros</b>
    If you want to reuse the same set of statements more than once in a program, you can include
    them in a macro. For example, this can be useful for long calculations or complex WRITE
    statements. You can only use a macro within the program in which it is defined, and it can only
    be called in lines of the program following its definition.
    The following statement block defines a macro <macro>:
    DEFINE <macro>.
    <statements>
    END-OF-DEFINITION.
    You must specify complete statements between DEFINE and END-OF-DEFINITION. These
    statements can contain up to nine placeholders (&1, &2, ..., &9). You must define the macro
    before the point in the program at which you want to use it.
Macros do not belong to the definition part of the program. This means that the DEFINE...END-OF-DEFINITION block is not interpreted before the processing blocks in the program. At the same time, however, macros are not operational statements that are executed within a processing block at runtime. When the program is generated, macro definitions are not taken into account at the point at which they are defined. For this reason, they do not appear in the overview of the structure of ABAP programs.
    A macro definition inserts a form of shortcut at any point in a program and can be used at any
    subsequent point in the program. As the programmer, you must ensure that the macro
    definition occurs in the program before the macro itself is used. Particular care is required if you
    use both macros and include programs, since not all include programs are included in the syntax
    check (exception: TOP include).
    To use a macro, use the following form:
    <macro> [<p1> <p2> ... <p9>].
    When the program is generated, the system replaces <macro> by the defined statements and
    each placeholder &i by the parameter <pi>. You can use macros within macros. However, a
    macro cannot call itself.
    DATA: RESULT TYPE I,
    N1 TYPE I VALUE 5,
    N2 TYPE I VALUE 6.
    DEFINE OPERATION.
    RESULT = &1 &2 &3.
    OUTPUT &1 &2 &3 RESULT.
    END-OF-DEFINITION.
    DEFINE OUTPUT.
    WRITE: / 'The result of &1 &2 &3 is', &4.
    END-OF-DEFINITION.
    OPERATION 4 + 3.
    OPERATION 2 ** 7.
    OPERATION N2 - N1.
This produces the following output:
    The result of 4 + 3 is 7
    The result of 2 ** 7 is 128
    The result of N2 - N1 is 1
    Here, two macros, OPERATION and OUTPUT, are defined. OUTPUT is nested in
    OPERATION. OPERATION is called three times with different parameters. Note
    how the placeholders &1, &2, ... are replaced in the macros.
    Rgds,
    Prakash

  • What is the difference between standard, sorted and hashed tables

<b>Can anyone say what is the difference between standard, sorted and hashed tables?</b>

    Hi,
    Standard Tables:
    Standard tables have a linear index. You can access them using either the index or the key. If you use the key, the response time is in linear relationship to the number of table entries. The key of a standard table is always non-unique, and you may not include any specification for the uniqueness in the table definition.
    This table type is particularly appropriate if you want to address individual table entries using the index. This is the quickest way to access table entries. To fill a standard table, append lines using the (APPEND) statement. You should read, modify and delete lines by referring to the index (INDEX option with the relevant ABAP command). The response time for accessing a standard table is in linear relation to the number of table entries. If you need to use key access, standard tables are appropriate if you can fill and process the table in separate steps. For example, you can fill a standard table by appending records and then sort it. If you then use key access with the binary search option (BINARY), the response time is in logarithmic relation to
    the number of table entries.
    Sorted Tables:
    Sorted tables are always saved correctly sorted by key. They also have a linear key, and, like standard tables, you can access them using either the table index or the key. When you use the key, the response time is in logarithmic relationship to the number of table entries, since the system uses a binary search. The key of a sorted table can be either unique, or non-unique, and you must specify either UNIQUE or NON-UNIQUE in the table definition. Standard tables and sorted tables both belong to the generic group index tables.
    This table type is particularly suitable if you want the table to be sorted while you are still adding entries to it. You fill the table using the (INSERT) statement, according to the sort sequence defined in the table key. Table entries that do not fit are recognised before they are inserted. The response time for access using the key is in logarithmic relation to the number of
    table entries, since the system automatically uses a binary search. Sorted tables are appropriate for partially sequential processing in a LOOP, as long as the WHERE condition contains the beginning of the table key.
    Hashed Tables:
Hashed tables have no internal linear index. You can only access hashed tables by specifying the key. The response time is constant, regardless of the number of table entries, since the search uses a hash algorithm. The key of a hashed table must be unique, and you must specify UNIQUE in the table definition.
    This table type is particularly suitable if you want mainly to use key access for table entries. You cannot access hashed tables using the index. When you use key access, the response time remains constant, regardless of the number of table entries. As with database tables, the key of a hashed table is always unique. Hashed tables are therefore a useful way of constructing and
    using internal tables that are similar to database tables.
    Regards,
    Ferry Lianto

  • Difference between WEP, WPA, and WPA2 and better suggestion to use for shared family users

What is the difference between WEP, WPA, and WPA2? My router is set up on my family PC and connected to a modem, so I access Wi-Fi through my laptop, and my sister has a laptop too and uses our family network to get internet. I just set up WPA today, so will we all be able to get internet protected (my family using the PC and my sis on her laptop, even at the same time), with nobody else able to use our network?

Wired Equivalent Privacy, commonly called WEP, is 802.11's first hardware form of security, where both the AP and the user are configured with an encryption key of either 64 bits or 128 bits in hex. When the user attempts to authenticate, the AP issues a random challenge. The user then returns the challenge, encrypted with the key. The AP decrypts this challenge, and if it matches the original, the client is authenticated. The problem with WEP is that the key is static, which means that with a little time and the right tool a hacker could use reverse engineering to derive the encryption key. It is important to note that this process does affect the transmission speed.
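That challenge-response exchange has a simple shape, and so does its weakness; a toy Python sketch (XOR with a repeating keystream purely for illustration; real WEP derives an RC4 keystream from the key plus an IV):

import os

def toy_keystream(key: bytes, n: int) -> bytes:
    return (key * (n // len(key) + 1))[:n]       # NOT real cryptography

def xor_crypt(key: bytes, msg: bytes) -> bytes:
    ks = toy_keystream(key, len(msg))
    return bytes(m ^ k for m, k in zip(msg, ks))

shared_key = b"static-wep-key"                   # configured on AP and client
challenge = os.urandom(16)                       # AP sends a random challenge
response = xor_crypt(shared_key, challenge)      # client encrypts and replies
assert xor_crypt(shared_key, response) == challenge   # AP verifies

# The flaw: anyone sniffing one exchange recovers keystream bytes,
# because challenge XOR response == keystream.
leaked = bytes(c ^ r for c, r in zip(challenge, response))
print(leaked == toy_keystream(shared_key, 16))   # True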
    WPA builds upon WEP, making it more secure by adding extra security algorithms and mechanisms to fight intrusion.
WiFi Protected Access (WPA) is the newer security standard adopted by the WiFi Alliance consortium. WiFi compliance ensures interoperability between different manufacturers' equipment. WPA delivers a level of security way beyond anything that WEP can offer, bridges the gap between WEP and 802.11i networks, and has the advantage that the firmware in older equipment may be upgradeable.
    WPA2 is based upon the Institute for Electrical and Electronics Engineers’ (IEEE) 802.11i amendment to the 802.11 standard, which was ratified on July 29, 2004. The primary difference between WPA and WPA2 is that WPA2 uses a more advanced encryption technique called AES (Advanced Encryption Standard), allowing for compliance with FIPS140-2 government security requirements. 

  • What is the fundamental difference between classful and classless routing?

    Hello to all,
    After reading several RFCs, guides and HOWTOs I am confused by an apparently trivial question - what is the basic, fundamental difference between classful and classless routing?
    I am well aware that - said in a very primitive way - the classful routing does not make use of netmasks and instead uses the address classes while the classless routing utilizes the netmasks and does not evaluate the address classes.
However, already in 1985, RFC 950 (Internet Standard Subnetting Procedure) stated that networks can be further subnetted using the network mask. Since then, routers have been expected to use network masks in the routing decision process in precisely the way they use them nowadays. However, if the routers use network masks, they are doing classless routing, aren't they? Where, then, is the difference, if we used to describe the 80's way of routing as classful routing? Or was it already classless routing? The RFCs about CIDR came gradually, only in 1992 and 1993.
    If somebody could give me an insight into the key difference between classful and classless routing (and perhaps into the Internet history, how was the real routing done then) I would be most grateful.
    Thank you a lot!
    Regards,
    Peter

    Hello Mohammed,
I am afraid we still have not understood each other ;) I am not looking for the algorithms used to select the best path. I am well aware of them, both Bellman-Ford and Dijkstra, and of their internals. By the way, these algorithms have no influence on whether the routing is classful or classless, because they deal with metrics, not with masks. For example, the classless EIGRP internally uses a distance-vector algorithm, not an SPF algorithm.
    I will try to explain once more what is my problem... There are two terms commonly used but badly defined: the classless routing and classful routing. Originally, I have thought that the classful routing works as follows:
    - The routing table consists only of classful destination networks (major nets), metrics and respective gateways. No network masks are stored in the table because we are classful, that is, we use exclusively the route classes and all entries in the routing table are already classful.
    - When routing a packet, the router looks at its destination IP address and determines the major net of this IP address (that is, the classful network that this IP address belongs to). Then it looks up the corresponding entry in the routing table and sends the packet to the respective gateway.
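In code, that classful lookup would look something like this toy Python sketch (illustration only; class D/E and subnetting are ignored):

def major_net(ip: str) -> str:
    # the mask is implied by the address class, never stored anywhere
    o = [int(x) for x in ip.split(".")]
    if o[0] < 128:                        # class A: implied /8
        return f"{o[0]}.0.0.0"
    if o[0] < 192:                        # class B: implied /16
        return f"{o[0]}.{o[1]}.0.0"
    return f"{o[0]}.{o[1]}.{o[2]}.0"      # class C: implied /24

routing_table = {                         # major net -> next hop, no masks
    "10.0.0.0": "192.168.1.1",
    "172.16.0.0": "192.168.1.2",
    "192.0.2.0": "192.168.1.3",
}

print(routing_table[major_net("10.45.1.7")])    # 192.168.1.1
print(routing_table[major_net("172.16.9.9")])   # 192.168.1.2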
    I thought that the classful routing works in this way. I won't describe the classless routing - both of us know how do the today's routers select the next hop.
    However, in the RFCs 917 and 950 which were published in 1985, long ago before the term 'classless routing' was coined, the network mask was already defined and it was stated how the routers should work with it.
Now I am confused. The terms classless addresses and classless routing were defined sometime in the 1990s, so I assume that the routing before the invention of classless IP assignment can in fact be described as classful. In other words, I thought that the routing commonly used in the 1980s did not use netmasks and can be described as classful, because the notion of classlessness came only in the 1990s. But now I see that netmasks were defined in 1985.
    Now where am I wrong? Do I understand the classful routing properly as I described it? Is it correct to talk about routing in that era as classful although the netmasks were already in use? Or was it already the classless routing?
    Basically I am trying to understand what was called the classful routing if the classless routing is said to be something different.
    Mohammed, I am most grateful to you for your patience and suggestions! Thank you indeed.
    Regards,
    Peter

  • Difference between Master Idoc and Communication Idoc.

    Can anyone list out the difference between Master Idoc and Communication Idoc?

    IDoc (for intermediate document) is a standard data structure for electronic data interchange (EDI) between application programs written for the popular SAP business system or between an SAP application and an external program. IDocs serve as the vehicle for data transfer in SAP's Application Link Enabling (ALE) system. IDocs are used for asynchronous transactions: each IDoc generated exists as a self-contained text file that can then be transmitted to the requesting workstation without connecting to the central database. Another SAP mechanism, the Business Application Programming Interface (BAPI) is used for synchronous transactions.
    Form and content: IDoc terminology
    As is often the case with proprietary technologies, SAP assigns specific, object-oriented meanings to familiar terms. When referring to IDocs, the term document refers to a set of data comprising a functional group of records with a business identity. (For example, all the data in a purchase order, or all the profile information of a supplier in a supplier master record.)
    A message refers to the contents of a specific implementation of an IDoc; it’s a logical reference. This differs from a reference to the IDoc itself, which specifies the message’s physical representation. Think of it this way: If you’re watching a parade pass by, the mayor waving to the crowd from his limousine is the message, and the mayor’s limousine (which is specific to the mayor) is the IDoc. You’re building a logical object, and the IDoc is both its container and the vehicle that moves it.
    The IDoc control record
    Each IDoc has a single control record, always the first record in the set. The structure of this record describes the content of the data records that will follow and provides administrative information, such as the IDoc’s functional category (Message Type/IDoc Type), as well as its origin (Sender) and destination (Receiver) as in conventional EDI
    Layout of an IDoc control record
    This “cover slip” format informs the receiving system of the IDoc’s purpose, and the receiving system uses it to determine the correct processing algorithm for handling the arriving IDoc.
    Data records
    The data records following the control record are structured alike, including two sections: a segment information section and a data section.
    In the first part of a data record, the segment information section describes the structure of the data that follows, for the benefit of the IDoc processor. There is a segment name (like an EDI segment identifier) that corresponds to a data dictionary structure to which the IDoc processor has access. The remaining information is useful for foreign systems, such as a partner company’s Oracle system, which has no such data dictionary.
    The second part of the record is the data itself, a storage area of 1,000 characters.
    Status records
    If you’ve ever ordered a package from a faraway location and tracked its progress using the Internet-based tracking utilities now provided by most major parcel carriers, you’re familiar with the list of stops and transfer points through which a package passes on its way to you.
    This collection of records is exactly what you’ll see in an IDoc that has begun its work. Following the data records in an IDoc, status records accumulate as an IDoc makes its way from one point in a process to another.
Typically, an IDoc will acquire several of these records as it does its job. They are simple records, consisting of a status code (there are more than 70 codes, covering a broad range of conditions and errors), a date/time stamp, and some additional status information fields for system audit purposes. In addition, as errors occur in the processing of an IDoc, status records are used to record these errors and the date/time of their occurrence.
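A hedged sketch of that three-part layout as plain data structures (Python dataclasses; the field names are illustrative, and real IDocs live in SAP tables such as EDIDC, EDID4 and EDIDS):

from dataclasses import dataclass, field
from typing import List

@dataclass
class ControlRecord:              # one per IDoc: the "cover slip"
    idoc_type: str
    message_type: str
    sender: str
    receiver: str

@dataclass
class DataRecord:                 # segment info plus the data section
    segment_name: str
    data: str                     # fixed 1,000-character storage area

@dataclass
class StatusRecord:               # appended as the IDoc moves along
    status_code: str              # one of the 70+ status codes
    timestamp: str

@dataclass
class IDoc:
    control: ControlRecord
    data: List[DataRecord] = field(default_factory=list)
    status: List[StatusRecord] = field(default_factory=list)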
    IDoc Base
    IDocs, as data formatting tools, enable the easy sharing of data between databases and applications within a company as well as being an efficient data courier between companies. Typically in SAP, a database of IDoc definitions exists, to which any application may have access.
    This “IDoc Base” gives all the applications and processes in your company domain the capacity to send, receive, and process a document in a contextually appropriate way, without doing anything to the data. For example, a purchase order IDoc can filter through every process it touches, passing from system to system, accumulating status records to track its progress.
    Every department using the data can use it appropriately without any cumbersome intermediate processes, because each department draws its key to interpreting the IDoc from the same source.
    Multiple messages
    One cumbersome feature of conventional EDI is the embedding of more than one functional record type in a document. The unwieldy X-12 888 Item Maintenance transaction set is an example: It purports to handle so many different configurations of product master data that it is horrifically difficult to integrate into an existing system.
    IDocs, on the other hand, handle multiple messages with ease. Given the centralized IDoc interpretation that SAP provides to all its parts, it’s no problem to define an IDoc that will contain more than one message, that is, more than one data record type.
A customer master IDoc, for example, may contain customer profile information records for a customer with many locations. But it may also contain location-specific pricing information records for that customer in the same document. This is an incredibly efficient way of bundling related records, particularly when passing large amounts of complex information from system to system.
Records in a multiple-message IDoc
