What's the best way to sync the time between FP and host?

I have tried using Set Date and Time (Client).VI and Write Time to FP RT Module (Server).VI to synchronize the time, but I get different behaviours when running these VIs. Is there any recommended way to do it?

The easiest way would be to set the host to be the time server of the FP module. The host would have to have a static IP address and be running the Logos Time Server. Otherwise, if you have LabVIEW RT 7.0, there is a VI in the RT>>Utilities palette which allows you to programmatically set the time of the FP module.
Regards,
JR A.
Application Engineer
National Instruments

Similar Messages

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with large amounts of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables receive inserts every few seconds, on the order of hundreds to thousands of rows.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the role of: 'Journalizing CDC' and 'IKM - Incremental Update' and
    how can I use them in my scenario?
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand what is the role of 'Journalizing CDC'?
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have a PK on the tables?
    What should I do if I can't put a PK (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only changed data in your source system. If your source tables had a date_modified column, you could always use that as a filter when scanning for changes rather than CDC. Log-based CDC (Asynchronous in ODI: Logminer/Streams, or Goldengate for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>), and this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table.
    You can negate this by specifying ALL source table columns in your PK in ODI - this forces all columns into the unconditional log group, the journal table, etc. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield in the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM set up (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, in the right order, to apply to your target tables; otherwise you might process the update before the insert, for example, and be out of sync.
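    To make the J$/JV$ mechanics concrete, here is a simplified sketch for a hypothetical ORDERS table journalized on ORDER_ID (the objects ODI really generates carry more metadata columns, such as subscriber and consumption flags):
      -- Journal table: holds only the PK plus change metadata
      CREATE TABLE J$ORDERS (
        JRN_FLAG  VARCHAR2(1),   -- I = insert/update, D = delete
        JRN_DATE  DATE,          -- when the change was captured
        ORDER_ID  NUMBER         -- the PK column(s) defined in ODI
      );
      -- Journalizing view: joins back to the source to recover the non-PK
      -- columns, which is why the source table is still touched at extract time
      CREATE OR REPLACE VIEW JV$ORDERS AS
      SELECT J.JRN_FLAG, J.JRN_DATE, S.*
      FROM   J$ORDERS J, ORDERS S
      WHERE  J.ORDER_ID = S.ORDER_ID;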
    So JKMs provide a mechanism for 'changed data only' to be provided to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the delete with a normal LKM/IKM set up).
    IKM Incremental Update can be used with or without JKMs; it's for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows on the load into the target.
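    As a rough illustration of that incremental load (table names hypothetical, and the exact SQL the IKM generates depends on the KM options chosen):
      INSERT INTO I$TARGET_ORDERS (ORDER_ID, STATUS, AMOUNT)
      SELECT ORDER_ID, STATUS, AMOUNT FROM C$_0ORDERS     -- staged source rows
      MINUS
      SELECT ORDER_ID, STATUS, AMOUNT FROM TARGET_ORDERS; -- unchanged rows drop out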
    user604062 wrote:
    I want to understand the role of: 'Journalizing CDC' and 'IKM - Incremental Update' and how can I use it in my scenario?
    Hopefully I have explained it above. It's the type of thing you really need to play around with, thoroughly reviewing the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html)
    In general, what are the relations between 'Journalizing' and 'IKM'?
    A JKM simply presents (only) changed data to ODI; it removes the need for you to decide 'how' to get the updates and removes the need for costly scans on the source table (full source-to-target table comparisons, scanning for updates based on a last update date, etc.)
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full: target table minus source table = deleted rows. Do you want to copy the whole source table every time to perform this? Are they in the same database?
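    For example, spotting deletes without CDC boils down to a full comparison along these lines (a sketch with hypothetical table names):
      SELECT ORDER_ID FROM TARGET_ORDERS
      MINUS
      SELECT ORDER_ID FROM ORDERS;  -- rows only in the target were deleted at source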
    I want to understand what is the role of 'Journalizing CDC'?
    It's the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies, and a few to choose from for Oracle (Triggers (Synchronous), Streams/Logminer (Asynchronous), Goldengate, etc.)
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it's the same process after the source table (there is an option in the interface to enable the J$ source; the IKM step changes with CDC, as you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need to have a PK on the tables?
    Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.)
    What should I do if I can't put a PK (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables, and one row gets updated - how can you identify which row in the target table to update?
    Thanks in advance Yael
    A lot to take in! As I advised, I would recommend you get a little test area set up, and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • What is the difference between OCI and OCCI?

    What is the difference between OCI and OCCI?

    Will Lee wrote:
    What is the difference between OCI and OCCI?
    Besides the other answers, there are a few additional points to consider:
    1) OCI is the "gold" standard API. New stuff is always available in OCI first, and only later trickles down to other APIs, like OCCI.
    2) OCI is a low-level API, harder to get started with than OCCI. APIs in OCI are often "untyped", taking a void*, which opens the door for errors.
    3) In OCCI you set values, while in OCI you bind them. So OCCI takes a copy of your values, while OCI takes an address at which to later read the value. This opens the door to subtle bugs where you pass the address of a temporary in OCI, which later crashes in some mysterious ways. So OCCI is way safer in this regard.
    4) OCI is C code, which is very portable. Because OCCI is C++ code, and on Windows you can't easily mix and match libraries compiled with different versions of Visual C++ (VC6, 7, 8, 9), you have to wait for Oracle to make a new build with the latest MS compiler. Just see the number of questions on this OCI forum and the OCCI one.
    5) OCI is used internally by Oracle to write many of their own tools; it's the lingua franca between the Core DB group and the other groups. Since they use it themselves, it's much more stable than OCCI, which is mostly only used by outside customers.
    6) The way SQL objects are dealt with in OCI and OCCI is fundamentally different, to the point where you can't mix and match OCCI and OCI object calls.
    #1 above is one reason we had to abandon using OCCI: it lacked support for the new-in-11g Binary XML, but that's just one example.
    IMHO OCI is the way to go, if you want the latest and greatest. Yes, it's more difficult to code against, so the learning curve is steeper, but once you've reached critical mass it's just fine. If you write code in C++ as opposed to C, you can easily make it a lot safer with a thin C++ layer on top which, unlike OCCI, still allows you to access any OCI raw handle to do stuff the wrappers don't expose. My $0.02 ;-) --DD

  • What is the difference between #variable_name and :variable_name?

    Hi!
    What is the difference between #variable_name and :variable_name?
    I have found that if we use an alphanumeric variable, :variable_name returns the value in quotes but #variable_name returns it without quotes.
    Why does it not work the same way for variable default values when the variable is used in a filter? (It works in mapping.)
    I use the variable in a filter like T.OUT_DATE>convert(datetime,:LAST_UPDATE_DATE,121)
    When I use my variable in a package and do a refresh, it works fine. But when I try to execute the same interface with the variable default value, I get an error. It seems the variable name has not been replaced by its value. It does not work with the default value in quotes, nor without quotes.
    Any ideas how to solve that?
    Thank you in advance!

    The question is how to make it work with the default value when I execute the interface standalone, not in a package. And why does it work in mapping but not in a filter?
    Also, I have found that it works if the source is Oracle. It fails only for an MS SQL source.
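    For anyone hitting the same issue, the distinction the poster describes usually comes down to substitution versus binding (a sketch; exact behaviour depends on the technology and agent). #VARIABLE is replaced as literal text before the statement is sent, so you add any quoting yourself, while :VARIABLE travels as a bind variable, which only works where the statement can actually be prepared with binds:
      -- #-syntax: ODI substitutes the value as literal text into the statement
      T.OUT_DATE > convert(datetime, '#LAST_UPDATE_DATE', 121)
      -- :-syntax: the value is sent as a bind variable; alphanumeric values
      -- arrive quoted, and not every filter or default-value context can bind
      T.OUT_DATE > convert(datetime, :LAST_UPDATE_DATE, 121)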

  • What is the difference between DSO and DTP in BI 7.0

    Hi Guru's
    What is the difference between DSO and DTP in BI 7.0, and how does the data come from R/3 to BI 7.0? Can you describe it?
    Points will be assigned.
    Thanks & Regards,
    Reddy.

    Hi,
    The data will be replicated in the same way as we do in 3.5:
    activating and transporting the same DataSources in BW, and replicating them in BW from R/3.
    First you need to know the difference between 3.5 and 7.0; for that, check the below docs:
    http://help.sap.com/saphelp_nw04s/helpdata/en/a4/1be541f321c717e10000000a155106/content.htm
    blogs:
    /people/sap.user72/blog/2004/11/01/sap-bi-versus-sap-bw-what146s-in-a-name
    Re: How to identify Header, Item and Schedule item level data sources?
    For Transformations in BI:
    http://help.sap.com/saphelp_nw70/helpdata/en/33/045741c0c28447e10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/f8/7913426e48db2ce10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/a9/497f42d540d665e10000000a155106/frameset.htm
    For DTP:
    DTP:
    http://help.sap.com/saphelp_nw70/helpdata/en/20/a894ed07e75648ba5cf7c876430589/frameset.htm
    For DSO:
    Data Store Objects:
    http://help.sap.com/saphelp_nw70/helpdata/en/f9/45503c242b4a67e10000000a114084/frameset.htm
    Reg
    Pra

  • What's the Difference Between OLAP and OLTP?

    HI,
    What's the difference between OLAP and OLTP? And which one is best?
    -Arun.M.D

    Hi,
    The big difference when designing for OLAP versus OLTP is rooted in the basics of how the tables are going to be used. I'll discuss OLTP versus OLAP in the context of the design of dimensional data warehouses. However, keep in mind there are more architectural components that make up a mature, best-practices data warehouse than just the dimensional data warehouse.
    Corporate Information Factory, 2nd Edition by W. H. Inmon, Claudia Imhoff, Ryan Sousa
    Building the Data Warehouse, 2nd Edition by W. H. Inmon
    With OLTP, the tables are designed to facilitate fast inserting, updating and deleting of rows of information with each logical unit of work. The database design is highly normalized, usually to at least 3NF. Each logical unit of work in an online application will have a relatively small scope with regard to the number of tables that are referenced and/or updated. Also, the online application itself handles the majority of the work for joining data to facilitate the screen functions. This means the user doesn't have to worry about traversing across large data relationship paths. There is a heavy dose of lookup/reference tables and much focus on referential integrity between foreign keys. The physical design of the database needs to take into consideration the need for inserting rows when deciding on physical space settings. A good book for getting a solid base understanding of modeling for OLTP is The Data Modeling Handbook: A Best-Practice Approach to Building Quality Data Models by Michael C. Reingruber, William W. Gregory.
    Example: Let's say we have a purchase order management system. We need to be able to take orders for our customers, and we need to be able to sell many items on each order. We need to capture the store that sold the item, the customer that bought the item (and where we need to ship things and where to bill), and we need to make sure that we pull from the valid store_items to get the correct item number, description and price. Our OLTP data model will contain a CUSTOMER_MASTER, a CUSTOMER_ADDRESS_MASTER, a STORE_MASTER, an ITEM_MASTER, an ITEM_PRICE_MASTER, a PURCHASE_ORDER_MASTER and a PURCHASE_ORDER_LINE_ITEM table. Then we might have a series of M:M relationships, for example: an ITEM might have a different price for specific time periods for specific stores.
    With OLAP, the tables are designed to facilitate easy access to information. Today's OLAP tools make the job of developing a query very easy. However, you still want to minimize the extensiveness of the relational model in an OLAP application. Users don't have the wills and means to learn how to work through a complex maze of table relationships. So you'll design your tables with a high degree of denormalization. The most prevalent design scheme for OLAP is the star schema, popularized by Ralph Kimball. The star schema has a FACT table that contains the elements of data that are used arithmetically (counting, summing, averaging, etc.). The FACT table is surrounded by lookup tables called dimensions. Each dimension table provides a reference to those things that you want to analyze by. A good book to understand how to design OLAP solutions is The Data Warehouse Toolkit: Practical Techniques for Building Dimensional Data Warehouses by Ralph Kimball.
    Example: Let's say we want to see some key measures about purchases. We want to know how many items were purchased, and for what sales amount, by what kind of customer across which stores. The FACT table will contain a column for Qty-purchased and Purchase Amount. The DIMENSION tables will include the ITEM_DESC (contains the item_id & description), the CUSTOMER_TYPE, the STORE (store_id & store name), and TIME (contains calendar information such as the date, the month_end_date, quarter_end_date, day_of_week, etc.).
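    A minimal sketch of that star schema in SQL (names hypothetical, types simplified), together with the kind of aggregate query it is built to answer:
      CREATE TABLE purchase_fact (
        item_key      NUMBER,   -- FK to item dimension
        customer_key  NUMBER,   -- FK to customer dimension
        store_key     NUMBER,   -- FK to store dimension
        date_key      NUMBER,   -- FK to time dimension
        qty_purchased NUMBER,
        purchase_amt  NUMBER
      );
      -- Slice the measures by dimension attributes
      SELECT s.store_name, c.customer_type, SUM(f.purchase_amt) AS total_sales
      FROM   purchase_fact f
      JOIN   store_dim    s ON s.store_key    = f.store_key
      JOIN   customer_dim c ON c.customer_key = f.customer_key
      GROUP  BY s.store_name, c.customer_type;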

  • What's the difference between "overloading" and "overriding" in Java

    What's the difference between "overloading" and "overriding" in Java

    hashdata wrote:
    What is the real-time usage of these concepts...?
    Overriding is used when two classes react differently to the same method call. A good example is toString(). For Object it just returns the class name and the identityHashCode, for String it returns the String itself, and for a List it (usually) returns a String representation of the content of the list.
    Overloading is used when similar functionality is provided for different arguments. A good example is [Arrays.sort()|http://java.sun.com/javase/6/docs/api/java/util/Arrays.html#sort(byte%5B%5D)]: all the sort() methods do the same thing (they sort arrays), but one sorts byte-arrays, another one sorts int-arrays, yet another one sorts Object-arrays.
    By the way, you almost certainly mean "real-world" usage. "Real-time" (and thus "real-time usage") means something entirely unrelated to your question.

  • What are the differences between Essbase and Planning?

    What are the differences between Essbase and Planning?

    Planning is an enterprise application built around the Essbase OLAP engine.
    You can create planning applications with Essbase only, but Planning uses best practices and has built-in enterprise features.
    Brian Chow

  • What is the difference between ABAP and HR-ABAP?

    Hi people,
    Could you just tell me about the difference between ABAP and HR-ABAP?
    Thanks in advance,
    Sanjeev K.V

    Hi Sir,
    Please have a look below. I hope it is a suitable and simpler answer to your question.
    Please do reward if useful.
    Thanks.
    HR deals with infotypes, which are similar to tables in general ABAP.
    There are different ways of fetching data from these infotypes.
    There are different areas in HR, like Personnel Administration, Organizational Management, Benefits, Time Management, Event Management, Payroll, etc.
    The infotypes for these areas differ from one area to another.
    The storing of record data in each type of area is different.
    Logical databases like PNP are used in HR programming.
    Instead of SELECT we use certain ROUTINES and PROVIDE..ENDPROVIDE, etc.,
    and in the case of payroll we use clusters, which we IMPORT and EXPORT for data fetching.
    On the whole, normal ABAP is different from HR ABAP.
    For Personnel Administration, the infotypes range from PA0000 to PA1999.
    Time-related infotypes range from PA2000 to PA2999.
    Organization-related infotypes range from HRP1000 to HRP1999.
    All custom-developed infotypes start at PA9000 onwards.
    In payroll processing we use clusters like PCL1, 2, 3 and 4.
    Instead of a SELECT query we use PROVIDE and ENDPROVIDE.
    You have to assign the logical database PNP in the program attributes.
    Go through the SAP documentation for HR programming and start doing.
    http://www.sapdevelopment.co.uk/hr/hrhome.htm
    See:
    http://help.sap.com/saphelp_46c/helpdata/en/4f/d5268a575e11d189270000e8322f96/content.htm
    sites regarding hr-abap:
    http://www.sapdevelopment.co.uk/hr/hrhome.htm
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/PAPA/PAPA.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/PAPD/PAPD.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/PYINT/PYINT_BASICS.pdf
    http://www.atomhr.com/training/Technical_Topics_in_HR.htm
    http://www.planetsap.com/hr_abap_main_page.htm
    You can see some Standard Program examples in this one ...
    http://www.sapdevelopment.co.uk/programs/programshr.htm
    http://searchsap.techtarget.com/originalContent/0,289142,sid21_gci1030179,00.html?Offer=SAlgwn12604#Certification
    http://www.erpgenie.com/faq/hr.htm.
    http://www.planetsap.com/hr_abap_main_page.htm
    http://www.sapbrain.com/TUTORIALS/FUNCTIONAL/HR_tutorial.html
    These are the FAQ's that might helps you as well.
    http://www.sap-img.com/human/hr-faq.htm
    http://www.sapgenie.com/faq/hr.htm
    http://www.planetsap.com/hr_abap_main_page.htm
    http://www.atomhr.com/library_full.htm
    HR Long texts Upload
    Look at the below link
    And finally,
    Few notes are below:
    InfoSets in the HR Application
    You can use SAP Query in HR to report on HR data. Queries are maintained as described in Creating Queries. The special features of queries created for HR are described in Maintaining Queries in the Human Resources Application. The maintenance procedure for HR InfoSets differs from the described procedure inasmuch as HR data fields are grouped together in infotypes.
    InfoSet management in SAP Query is also used for InfoSet Query. For further information, see Functions for Managing InfoSets.
    If you want to create InfoSets for HR, you can use logical databases PNP, PNPCE, PAP, and PCH (see HR Logical Databases). The database you must use to create your InfoSet depends on the component in which the data you want to report on is stored.
    The reports you can execute using InfoSets based on logical databases PNP (or PNPCE) or PCH are similar, but differ in that they can select different objects. The following table describes the connection between the logical database, and the infotypes you can include in an InfoSet. It also provides you with one or two examples of reports that you can execute using the appropriate InfoSets.
    Logical database: PNP/PNPCE* | PCH | PAP
    Selection of:
    · PNP/PNPCE: persons
    · PCH: objects from Personnel Planning
    · PAP: applicants
    Infotypes that can be included in the InfoSet:
    · PNP/PNPCE: infotypes for Personnel Administration (0000-0999), Time Management (2000-2999), payroll infotypes, and infotypes for Personnel Planning objects that can be related to persons
    · PCH: if the object type is specified, infotypes for the object type and infotypes for objects that can be related to the specified object type; if the object type is not specified, all infotypes
    · PAP: infotypes for Recruitment (4000-4999), some infotypes for Personnel Administration (such as 0001 and 0002), and customer infotypes
    Reporting examples:
    · Selection of all persons who participated in a specific business event, output of prices for reserved business events
    · Selection of all persons assigned to a specific personnel area, output of qualifications held by these persons
    · Selection of all business events held in London in March, output of all persons who participated in these business events
    · Selection of all positions assigned to a specific organizational unit, output of all persons assigned to the positions
    · Selection of all applicants hired last year to work on special projects, output of addresses for the applicants selected
    Logical database PNPCE (PNP Concurrent Employment) functions just like logical database PNP. The procedure for creating InfoSets is also the same. It only becomes significant if you work with Concurrent Employment.
    Creating InfoSets
    The maintenance procedure for HR InfoSets differs from the procedure described so far in this section inasmuch as HR data fields are grouped together in infotypes. To set up an InfoSet for the HR application, proceed as follows:
    1. On the initial screen for maintaining InfoSets, enter a name for the InfoSet and choose Create.
    2. On the next screen, enter a name for the InfoSet and select one of the HR logical databases in accordance with your reporting requirements.
    Customer infotypes can be created on all HR logical databases. In each individual case, therefore, you must decide which database to select so that you can report on customer infotypes.
    This screen enables you to enter an authorization group. All of the queries that are subsequently created using this InfoSet can only be executed by persons who have this authorization group.
    3. Choose .
    This takes you to the Infotype Selection for InfoSet  screen. You now have the option of creating field groups and assigning fields as required for non-HR InfoSets. Field groups that correspond to infotypes and already contain fields, however, are always created for HR InfoSets. The field groups are displayed in an overview tree in the top right section of the screen.
    The infotypes that you included in the InfoSet are displayed in an overview tree on the left of the screen. The infotype fields that are already included in field groups are displayed in a different color, and the corresponding field group ID is displayed.
    In the standard system, a field group is created automatically for each infotype that you included in the InfoSet (a field group corresponds to an infotype).
    In the standard system, each field group contains the infotype-specific fields. To ensure that working with the InfoSet is as easy as possible, you are advised to restrict your use of fields in each field group to those you really require. This means you should remove fields that are not required.
    An infotype's fields must only be assigned to the pertinent field group. Make sure this assignment is correct. If the assignment is incorrect, the InfoSet could be rendered unusable.
    When an InfoSet is created, the following fields are transferred automatically to the first field group:
    § Logical database PNPCE or PNP Personnel number
    § Logical database PAP Applicant number
    § Logical database PCH Object ID, plan version, and object type
    6. Determine the fields that must be included in the field groups of your InfoSet. If you require further information, see Assigning Fields to a Field Group.
    If you want, you can change the default sequence of field groups and fields as required using Drag&Drop.
    7. To save the InfoSet, choose .
    8. To generate the InfoSet, choose .
    On the Change InfoSet (InfoSet name) screen, you can choose Edit → Change infotype selection to add more infotypes to the InfoSet, or to remove infotypes from the InfoSet. Remember to regenerate the InfoSet afterwards.
    This screen also enables you to update InfoSets if, for example, the system contains new additional fields for specific key values. To do so, choose InfoSet → Additional functions → Update additional HR fields.
    9. Go back to the initial screen for InfoSet maintenance.
    10. Choose User group assignment.
    11. Select a user group, and save your entry.
    sample code
    START-OF-SELECTION.
    GET pernr.
      " Earliest Actions (P0000) record in the selection period
      rp_provide_from_frst p0000 space pn-begda pn-endda.
      IF pnp-sw-found EQ '1'.
        " Matching Organizational Assignment (P0001) record
        READ TABLE p0001 WITH KEY pernr = p0000-pernr.
        IF sy-subrc = 0.
          WRITE: p0001-plans. " earliest position
        ENDIF.
      ENDIF.
      " Latest Recurring Payments/Deductions (P0014) record in the period
      rp_provide_from_last p0014 space pn-begda pn-endda.
      IF pnp-sw-found EQ '1'.
        READ TABLE p0014 WITH KEY pernr = p0000-pernr.
        IF sy-subrc = 0.
          WRITE: p0014-lgart. " wage type
        ENDIF.
      ENDIF.

  • Hi gurus, what is the difference between table and template in Smartforms

    Hi gurus, what is the difference between table and template in Smartforms?

    Hi Vasu,
    A template is used for proper alignment of data, while a table is used for displaying multiple rows of data.
    We can say a template is for static data and a table is for dynamic data.
    Suppose we have a requirement in which we have to align the customer address as shown below:
    Name- Vasu Company- WIPRO Location- Chennai
    Desig- S/W Native - Mumbai
    Then for proper alignment we can create a template, split it into 3 columns and 2 rows, and create text elements for each cell to display properly aligned data in the output.
    When we include a template inside a loop, it behaves the same way as a table.
    When we have multiple data records that may extend to the next page, as when we display all employee details in a company, we use a table.
    A table has 3 sections: HEADER, ITEM and FOOTER.
    The header section is executed once, then the table loops at the item level; at the end, the footer is executed.
    Hope this gives you some idea.
    Please reward if useful.
    Regards,
    sunil kairam.

  • What are the differences between Cairngorm 2 and Parsley (Cairngorm 3)? Very urgent... please help me out

    Hi all,
    I am familiar with Cairngorm 2, and I am new to Parsley. Can anyone give the differences between Cairngorm 2 and Parsley (Cairngorm 3)?
    And also, please:
    1) How do you create a BeanConfig.mxml configuration file in Parsley? In how many ways can we inject beans in BeanConfig.mxml?
    2) How is an event dispatched and handled in Parsley, step by step?
    3) Please explain with a small example: insert username and password into a database using LCDS.
    thanks
    -Balu

    Hi
    You can refer the following links for your question.
    Difference between AET and EEWB
    What is the use of AET? What are the differences between AET and EEWB?
    Difference between EEWB - UI Configuration Tool - AET
    http://senthilsapcrm.wordpress.com/2010/02/04/adding-custom-fields-in-sap-crm-7-0-using-aet/
    What is the main difference between eewb and aet tool ?
    Hope it is useful.
    Thanks and regards
    Preeti Viswanath

  • What are the differences between inactive and active ABAP objects?

    Can anybody tell me what are the differences between inactive and active ABAP objects?
    In my opinion, an active object is compiled and system-wide available; that means the system does not have to compile the program again before the object is run or used. An inactive object is not system-wide available, and every time you run an inactive object the ABAP runtime first has to generate a temporary runtime object; the inactive object cannot be seen by others.
    Am I right? Can anybody kindly tell me other differences?

    Hi,
    "When it is inactive, it is like it would not exist at all:" no - it's like it only exists to you
    "If we just saved that one means it is stored in application server not in database": no - the inactive version is also stored in the database. You can log off and log on and it will still be there, in its inactive status.
    "Only active objects can be executed.": no - inactive objects can be executed by you
    When you create or modify a program, it is inactive until you activate it.
    With a change, there are two versions of the program stored in the database - the active version (as it was before you made your change), and the inactive version. If you attempt to run the program, you'll run the inactive version - the one with your changes. Everyone else on the system will run the active version.
    In this way, you can make changes without affecting anyone else.
    Once you activate your program, then the inactive version becomes the active version.
    With a create, there is no active version, until you hit the activate button. This means ONLY you can run the program.
    An additional benefit of this model, is that if you make a change, save it, and then change your mind without activating, you can recover the active version into the editor, using version management.
    A downside is that sometimes you have to activate your change before you can test it, if it interacts with other, active, programs.
    Regards,
    Kumar

  • What is the difference between tkprof and explain plan

    Hi,
    What is the difference between TKPROF and EXPLAIN PLAN?

    Execution Plans and the EXPLAIN PLAN Statement
    Before the database server can execute a SQL statement, Oracle must first parse the statement and develop an execution plan. The execution plan is a task list of sorts that decomposes a potentially complex SQL operation into a series of basic data access operations. For example, a query against the dept table might have an execution plan that consists of an index lookup on the deptno index, followed by a table access by ROWID.
    The EXPLAIN PLAN statement allows you to submit a SQL statement to Oracle and have the database prepare the execution plan for the statement without actually executing it. The execution plan is made available to you in the form of rows inserted into a special table called a plan table. You may query the rows in the plan table using ordinary SELECT statements in order to see the steps of the execution plan for the statement you explained. You may keep multiple execution plans in the plan table by assigning each a unique statement_id. Or you may choose to delete the rows from the plan table after you are finished looking at the execution plan. You can also roll back an EXPLAIN PLAN statement in order to remove the execution plan from the plan table.
    The EXPLAIN PLAN statement runs very quickly, even if the statement being explained is a query that might run for hours. This is because the statement is simply parsed and its execution plan saved into the plan table. The actual statement is never executed by EXPLAIN PLAN. Along these same lines, if the statement being explained includes bind variables, the variables never need to actually be bound. The values that would be bound are not relevant since the statement is not actually executed.
    You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are trying to explain. The one difference is that in order to explain a statement that involves views, you must have privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on underlying objects of the view” error.
    The columns that make up the plan table are as follows:
    Name Null? Type
    STATEMENT_ID VARCHAR2(30)
    TIMESTAMP DATE
    REMARKS VARCHAR2(80)
    OPERATION VARCHAR2(30)
    OPTIONS VARCHAR2(30)
    OBJECT_NODE VARCHAR2(128)
    OBJECT_OWNER VARCHAR2(30)
    OBJECT_NAME VARCHAR2(30)
    OBJECT_INSTANCE NUMBER(38)
    OBJECT_TYPE VARCHAR2(30)
    OPTIMIZER VARCHAR2(255)
    SEARCH_COLUMNS NUMBER
    ID NUMBER(38)
    PARENT_ID NUMBER(38)
    POSITION NUMBER(38)
    COST NUMBER(38)
    CARDINALITY NUMBER(38)
    BYTES NUMBER(38)
    OTHER_TAG VARCHAR2(255)
    PARTITION_START VARCHAR2(255)
    PARTITION_STOP VARCHAR2(255)
    PARTITION_ID NUMBER(38)
    OTHER LONG
    DISTRIBUTION VARCHAR2(30)
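    For example, a minimal session might look like this (a sketch assuming a plan table created by utlxplan.sql; the dept query is just a placeholder):
      EXPLAIN PLAN SET STATEMENT_ID = 'demo1' FOR
      SELECT * FROM dept WHERE deptno = 10;
      -- Walk the saved plan steps in execution-tree order
      SELECT LPAD(' ', 2 * (LEVEL - 1)) || operation || ' ' || options
             || ' ' || object_name AS plan_step
      FROM   plan_table
      START  WITH id = 0 AND statement_id = 'demo1'
      CONNECT BY PRIOR id = parent_id AND statement_id = 'demo1';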
    There are other ways to view execution plans besides issuing the EXPLAIN PLAN statement and querying the plan table. SQL*Plus can automatically display an execution plan after each statement is executed. Also, there are many GUI tools available that allow you to click on a SQL statement in the shared pool and view its execution plan. In addition, TKPROF can optionally include execution plans in its reports as well.
    Trace Files and the TKPROF Utility
    TKPROF is a utility that you invoke at the operating system level in order to analyze SQL trace files and generate reports that present the trace information in a readable form. Although the details of how you invoke TKPROF vary from one platform to the next, Oracle Corporation provides TKPROF with all releases of the database and the basic functionality is the same on all platforms.
    The term trace file may be a bit confusing. More recent releases of the database offer a product called Oracle Trace Collection Services. Also, Net8 is capable of generating trace files. SQL trace files are entirely different. SQL trace is a facility that you enable or disable for individual database sessions or for the entire instance as a whole. When SQL trace is enabled for a database session, the Oracle server process handling that session writes detailed information about all database calls and operations to a trace file. Special database events may be set in order to cause Oracle to write even more specific information—such as the values of bind variables—into the trace file.
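    For instance, tracing your own session and then formatting the result might look like this (the trace file name is illustrative; its directory and name are platform- and version-specific):
      ALTER SESSION SET sql_trace = TRUE;
      -- ... run the statements you want profiled ...
      ALTER SESSION SET sql_trace = FALSE;
      -- Then, at the operating system prompt, something like:
      --   tkprof ora_1234.trc report.txt sys=no sort=prsela,exeela,fchela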
    SQL trace files are text files that, strictly speaking, are human readable. However, they are extremely verbose, repetitive, and cryptic. For example, if an application opens a cursor and fetches 1000 rows from the cursor one row at a time, there will be over 1000 separate entries in the trace file.
    TKPROF is a program that you invoke at the operating system command prompt in order to reformat the trace file into a format that is much easier to comprehend. Each SQL statement is displayed in the report, along with counts of how many times it was parsed, executed, and fetched. CPU time, elapsed time, logical reads, physical reads, and rows processed are also reported, along with information about recursion level and misses in the library cache. TKPROF can also optionally include the execution plan for each SQL statement in the report, along with counts of how many rows were processed at each step of the execution plan.
    The SQL statements can be listed in a TKPROF report in the order of how much resource they used, if desired. Also, recursive SQL statements issued by the SYS user to manage the data dictionary can be included or excluded, and TKPROF can write SQL statements from the traced session into a spool file.
    How EXPLAIN PLAN and TKPROF Aid in the Application Tuning Process
    EXPLAIN PLAN and TKPROF are valuable tools in the tuning process. Tuning at the application level typically yields the most dramatic results, and these two tools can help with the tuning in many different ways.
    EXPLAIN PLAN and TKPROF allow you to proactively tune an application while it is in development. It is relatively easy to enable SQL trace, run an application in a test environment, run TKPROF on the trace file, and review the output to determine if application or schema changes are called for. EXPLAIN PLAN is handy for evaluating individual SQL statements.
    By reviewing execution plans, you can also validate the scalability of an application. If the database operations are dependent upon full table scans of tables that could grow quite large, then there may be scalability problems ahead. On the other hand, if large tables are accessed via selective indexes, then scalability may not be a problem.
    EXPLAIN PLAN and TKPROF may also be used in an existing production environment in order to zero in on resource intensive operations and get insights into how the code may be optimized. TKPROF can further be used to quantify the resources required by specific database operations or application functions.
    EXPLAIN PLAN is also handy for estimating resource requirements in advance. Suppose you have an ad hoc reporting request against a very large database. Running queries through EXPLAIN PLAN will let you determine in advance if the queries are feasible or if they will be resource intensive and will take unacceptably long to run.

  • What is the difference between UPLOAD and WS_UPLOAD?

    Hi,
    What is the difference between UPLOAD and WS_UPLOAD?
    Best Regards,
    Gopal

    Hi,
    Both UPLOAD and WS_UPLOAD provide the same functionality: they transfer data from the presentation server to the application server, i.e. from the PC into the SAP system (into an internal table).
    There are very few differences between the two:
    1. UPLOAD requires user interaction: the user has to respond to the dialog box that appears. WS_UPLOAD does not; you specify the file location in the function's input parameters.
    2. With UPLOAD you can choose the file at runtime; with WS_UPLOAD you have to pass it to the function module.
    3. UPLOAD is meant to be used by ABAPers; WS_UPLOAD is meant to be called by SAP. Neither is a standard ABAP command.
    4. WS_UPLOAD is obsolete: it is no longer supported by SAP. Use GUI_UPLOAD instead.
    The FM WS_UPLOAD was the first version of the FM used to get a file from the presentation server; from 4.6 it is obsolete, and the new FM is GUI_UPLOAD.
    The FM UPLOAD is used in several applications, like the ABAP editor, to upload files.
    In some versions (it depends on the patch level) UPLOAD can call WS_UPLOAD (or GUI_UPLOAD).
    Last but not least, UPLOAD internally calls WS_UPLOAD; that's the difference.
    Hope this would be helpful.
    regards,
    Varun.

  • What is the difference between jdk and jre

    What is the difference between the JDK and the JRE?
    Please tell me in detail.

    It's an extremely important skill to learn how to search the web. Not only will it increase your research and development talents, it will also save you from asking questions that have already been answered numerous times before. By doing a little research before you ask a question, you'll show that you're willing to work and learn without needing to have your hand held the entire time; a quality that is seemingly rare but much appreciated by the volunteers who are willing to help you.
    If you've done the research, found nothing useful, and decide to post your question, it's a great idea to tell us that you've already searched (and what methodologies you used to do your research). That way, we don't refer you back to something you've already seen.
    To get you started, here is an excellent resource: The Java™ Glossary
