Real-Time Scheduling

I've heard of some stuff going on in kernel development referred to as "real-time scheduling." Could someone tell me what it is, and what its benefits are? I've been having a hard time finding information on it.

brebs wrote:
See wiki.
That took me about 0.05 seconds of googling
Note to self: drink coffee prior to doing anything on the computer.  Or caffeinated cereal.

Similar Messages

  • Oracle Enterprise Real-Time Scheduling (Sidewinder)

    Hi Friends,
    Any idea where I can find some documentation on Oracle Enterprise Real-Time Scheduling (Sidewinder)?
    Regards,
    Anirban Roy

    Start here:
    http://www.oracle.com/corporate/press/2007_mar/030507_Oracle%20Utilities%20MWM.htm
    Those contacts might lead you to the documentation

  • Need urgent help in Real Time scheduling algorithm implementation

    I am new to Java Real Time programming. I am facing the problem stated below:
    declare tasks
    for each task:
        if it is schedulable, then run the task
        otherwise it is not schedulable
    for each task:
        check priority with respect to deadline
        select the highest-priority task
    for each task:
        start the highest-priority task
        if the running task is preemptible,
            then switch from the lowest-priority task to the highest-priority task and stop the lowest-priority task
        after the highest-priority task completes, the lowest-priority task completes the rest of its work
    This is actually pseudocode for real-time dynamic scheduling (rate-monotonic scheduling). Can anyone please help me develop the code?

    PrinceOfLight wrote:
    I am new to Java Real Time programming.
    this is actually a pseudocode for real time dynamic scheduling
    Define "real-time" in your context. Most people who use the term don't use it correctly. If you truly mean "real-time," then you'll need real-time Java running on a real-time OS.
    can anyone please help me develop the code?
    What specific problems are you having? If you have no clue where to start, this site is not a good resource.
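    For what it's worth, the core of rate-monotonic scheduling is small enough to sketch: sort the tasks by period and give the shortest period the highest priority. Below is an illustrative Java outline of just that assignment step (the task names and periods are invented for the example). Plain java.lang.Thread priorities carry no hard real-time guarantees on a standard JVM, so treat this as a sketch of the algorithm rather than a real scheduler.

    import java.util.Comparator;
    import java.util.List;

    // Illustrative rate-monotonic sketch: the shorter a task's period,
    // the higher its priority. Not a real-time scheduler.
    public class RateMonotonicSketch {

        record Task(String name, long periodMillis, Runnable body) {}

        public static void main(String[] args) throws InterruptedException {
            List<Task> tasks = List.of(
                new Task("sensor",  10, () -> { /* 10 ms periodic work */ }),
                new Task("control", 20, () -> { /* 20 ms periodic work */ }),
                new Task("logger", 100, () -> { /* 100 ms periodic work */ }));

            // Rate-monotonic priority assignment: sort by period, shortest first.
            List<Task> byPeriod = tasks.stream()
                .sorted(Comparator.comparingLong(Task::periodMillis))
                .toList();

            int prio = Thread.MAX_PRIORITY;
            for (Task t : byPeriod) {
                Thread th = new Thread(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        long next = System.currentTimeMillis() + t.periodMillis();
                        t.body().run();
                        long sleep = next - System.currentTimeMillis();
                        // A negative sleep means a deadline overrun; a real
                        // scheduler would have to react to that here.
                        if (sleep > 0) {
                            try { Thread.sleep(sleep); }
                            catch (InterruptedException e) { return; }
                        }
                    }
                }, t.name());
                th.setPriority(prio);  // shortest period gets the highest priority
                th.setDaemon(true);
                th.start();
                prio = Math.max(Thread.MIN_PRIORITY, prio - 1);
            }
            Thread.sleep(500); // let the periodic tasks run briefly for the demo
        }
    }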

  • Oracle Real Time Scheduler(ORS) Integration

    Hi,
    Currently I am working on ORS integration with the help of web services. I created a web-service client in Java but I am not able to receive a response from the ORS. When I use soapUI to send the request, the response that I receive is an XSD.
    Can you please help me with how to get a response from the ORS?
    Thanks in advance
    Vikram

    Hi,
    I am also looking for the same.
    Did you get any link or documents?
    Thanks,
    Dev

  • Real-time database or non real-time database

    I know the scheduling mechanism in the Oracle database is somewhat like that of Linux, which is not real-time. Does Oracle provide a real-time database with real-time scheduling that guarantees each user transaction meets a deadline? Thanks a lot.

    The Berkeley DB that Oracle owns has been ported to QNX, which is a real-time operating system.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/build_unix/qnx.html
    Why, out of curiosity, are you looking into Oracle for a real-time system? That implies that you're running a real-time operating system on your servers, which is certainly unusual. I've heard of some ports of 9i to real-time operating systems, but that would be rare.
    Justin

  • What is the difference between Real-Time (RT) and Time-Sharing (TS)?

    Hi Guys,
    I am trying to distinguish between LMS process priorities of Real-Time vs. Time-Sharing and was not able to find any good document explaining the difference between them. Maybe one of you gurus can explain the difference between TS and RT? Any Metalink doc ID or URL on this issue will be appreciated.
    --MM

    Hi, time-sharing scheduling is based on credits that are allocated to a process at creation. As long as the process keeps consuming OS resources (CPU etc.), its priority keeps changing dynamically (because the process has spent credits), whereas processes with real-time scheduling are given priority over processes configured with time sharing (even over kernel resources).
    In simple terms, if LMS is configured with time-sharing scheduling on a very busy Oracle RAC system, the LMS process may contend for CPU resources, which may cause a performance bottleneck in the cluster environment.
    In Oracle Real Application Clusters, various organizations have gained performance benefits by running a few Oracle processes, such as LMS, with real-time scheduling.
    Thanks & Regards
    -Harish Kalra
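    To make the contrast concrete, here is a toy Java simulation of the two policies described above; the priority numbers and decay rule are invented for illustration and are not Solaris's actual dispatch tables. The time-sharing tasks lose priority with every CPU tick they consume, while the real-time task's priority is fixed above the entire time-sharing range, so the dispatcher picks it every time.

    import java.util.Comparator;
    import java.util.List;

    // Toy model of the two scheduling classes: a TS task's priority decays as it
    // consumes CPU ticks (it "spends credits"), while an RT task's priority is
    // fixed above the whole TS range. All numbers are invented for illustration.
    public class SchedulingClassDemo {

        static final class Task {
            final String name;
            final boolean realTime;
            int priority; // effective priority; the dispatcher picks the highest

            Task(String name, boolean realTime, int priority) {
                this.name = name;
                this.realTime = realTime;
                this.priority = priority;
            }

            void consumeTick() {
                if (!realTime) {
                    priority = Math.max(0, priority - 1); // TS: CPU use costs credits
                }
                // RT: priority stays fixed no matter how much CPU is used.
            }
        }

        public static void main(String[] args) {
            List<Task> tasks = List.of(
                new Task("lms-rt",   true, 100),  // RT class, above the TS range
                new Task("batch-ts", false, 59),
                new Task("query-ts", false, 59));

            for (int tick = 0; tick < 10; tick++) {
                // Dispatcher: always run the highest-priority task.
                Task next = tasks.stream()
                    .max(Comparator.comparingInt(t -> t.priority))
                    .orElseThrow();
                next.consumeTick();
                System.out.printf("tick %2d -> %s (prio now %d)%n",
                                  tick, next.name, next.priority);
            }
            // lms-rt wins every tick because its priority never decays - the same
            // reason an RT-class LMS process avoids CPU starvation on a busy node.
        }
    }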

  • Scheduling in Real-Time Java

    Hello,
    I have some questions concerning how scheduling in fact is intended to be performed in a RTSJ based Real-Time Java System.
    As far as I understood, RTSJ requires pre-emptive priority-based dispatching of Schedulable objects.
    This means that the execution eligibility of a schedulable entity is mainly its priority.
    That causality is reflected within the specification with the (one-and-only specified) PriorityScheduler, which is the base scheduler for actual Real-Time Java applications.
    Furthermore, there is a notion of extensibility of that PriorityScheduler described by the RTSJ,
    in order to provide further scheduling mechanisms and feasibility analysis algorithms (please correct me if any of these assumptions are wrong).
    This is the point where everything becomes really weird to me ...
    As far as I could investigate, in most RTSJ implementations based on a POSIX compliant system underneath (like Java RTS does on RTLinux or Solaris)
    each (Realtime)JavaThread is mapped 1-to-1 to a light-weight process on the operating system level (e.g. a pthread).
    So far, we have no "green threads" within the JVM, but real LWPs scheduled by the OS.
    The difference between "normal" and "real-time" threads lies in the scheduling policy used for that mapping.
    While normal Java threads probably map to SCHED_RR or SCHED_OTHER, real-time threads are scheduled by the OS via the SCHED_FIFO policy in order to achieve a better real-time predictability.
    However, the OS's scheduling mechanisms automatically make decisions about the right positioning of an LWP within the appropriate run queue, based on the thread's preemption, blocking and release activities (even dynamic priority changes) and its scheduling policy.
    That's exactly why I ask myself: what is the need for a Scheduler representation within the JVM?
    Furthermore, how is a Scheduler extension able to integrate with the threading model and the underlying scheduling mechanisms of the OS?
    One point could be a situation where a real-time JVM runs directly on top of the bare hardware and has to perform scheduling decisions on its own.
    The Scheduler API could then be understood as an extension mechanism of a kind of JVM-intern scheduler (e.g. the PriorityScheduler), thereby allowing scheduling decisions to be made even in user defined Scheduler implementations.
    A similar use case for an OS-based scenario could be a JVM that passes the scheduling/threading routines of the underlying OS (e.g. a part of the POSIX API)
    up to the Java application level in order to provide the opportunity for a kind of application-defined scheduling (as in, e.g., the MaRTE OS).
    Unfortunately, after inspecting the RTSJ API, both conclusions seem to be wrong.
    So far, Java RTS seems not to provide any mechanism for reacting to scheduling events/decisions, neither intra-JVM nor from an underlying OS outside the JVM.
    Furthermore, there is no notion of integration with the base PriorityScheduler for making extended scheduling decisions.
    I hope this post can bring me more light on the scheduling idea behind Real-Time Java systems as intended by the RTSJ.
    Sincerely,
    Vladimir

    Vladimir.Nikolov wrote:
    That means that a scheduling policy different from PriorityScheduler can only be assigned to a Schedulable object if it is supported by the OS and the JVM?
    Well, it has to be supported by that implementation of the RTSJ. How that is done - i.e. whether it requires OS support - depends on the VM, the OS and the actual scheduling policy.
    It also seems that at the current state of the art the PriorityScheduler representative within the JVM is intended only for manipulating a feasibility set of Schedulable objects (supporting online feasibility analysis)?
    However, since user-defined scheduling is not intended by the specification, applications have to rely on the feasibility analysis based on the underlying/supported scheduling mechanisms.
    Thus, in the current Java RTS implementation this would be the "default" feasibility mechanism based on the PriorityScheduler.
    Unfortunately I cannot figure out the need for maintaining a feasibility set, since feasibility, as specified for the PriorityScheduler, is a simple assumption that we have "an adequately fast machine to handle the periodic and sporadic load"?
    I actually assumed that feasibility analysis performs real cost budgeting, taking into account deadlines and so on, but it seems to be specified simply to make a negative statement whenever aperiodic tasks are involved?
    The RTSJ scheduling framework provides support for feasibility analysis by defining the admission control methods, e.g. setXXXIfFeasible. However the RTSJ does not, and cannot, mandate any non-trivial feasibility algorithm because, in simple terms, no such general algorithms exist. There are some static feasibility tests in the literature and you could apply those offline to your application (assuming you can find the values of all the "magic" numbers in such formulae - which is generally not the case). At present the RTSJ doesn't support even these simple feasibility tests because blocking time is not recorded in the release parameters - something being addressed in RTSJ 1.1. In any case, unless there is a pluggable framework for feasibility tests, it would be a waste of time for VMs to implement them, given they can (more) easily be done offline using other tools.
    Only dynamic admission control is really of interest, and as far as I am aware no general dynamic admission control policies exist (anything you find in the literature is very context specific). So it is left up to an implementation whether to define dynamic admission control algorithms - and so far none have, because they don't have one.
    In "Getting More Flexible Scheduling in the RTSJ" Wellings and Zerzelidis propose some (more or less) "minor" extensions to the RTSJ API in order to enable hierarchical scheduling within the fixed priority framework.
    Since Andy Wellings is a member of the RTSJ Technical Interpretation Committee, is there any attempt to evolve the specification in a similar direction as described above, in order to support more flexible scheduling mechanisms and feasibility analysis?
    If there is ever an RTSJ 2.0 then more sophisticated scheduling support is one of the items on the wishlist. But there's no guarantee there will ever be an RTSJ 2.0.
    David Holmes
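    For readers following this exchange, here is a minimal sketch of the fixed-priority, periodic model being discussed, written against the RTSJ 1.0 javax.realtime API (it therefore needs an RTSJ implementation such as Java RTS to run; the priority and period values are arbitrary). A RealtimeThread takes its execution eligibility from PriorityParameters and its release characteristics from PeriodicParameters, and waitForNextPeriod() blocks until the next release - these are the pieces the PriorityScheduler dispatches on.

    import javax.realtime.PeriodicParameters;
    import javax.realtime.PriorityParameters;
    import javax.realtime.PriorityScheduler;
    import javax.realtime.RealtimeThread;
    import javax.realtime.RelativeTime;

    // Minimal RTSJ sketch: a periodic real-time thread under the PriorityScheduler.
    public class PeriodicRtExample {
        public static void main(String[] args) {
            PriorityScheduler sched = PriorityScheduler.instance();

            // Execution eligibility: a fixed priority near the top of the RT range.
            PriorityParameters prio =
                new PriorityParameters(sched.getMaxPriority() - 1);

            // Release characteristics: start immediately, period 10 ms;
            // cost, deadline and the overrun/miss handlers are left null (defaults).
            PeriodicParameters release = new PeriodicParameters(
                null, new RelativeTime(10, 0), null, null, null, null);

            RealtimeThread rt = new RealtimeThread(prio, release) {
                @Override
                public void run() {
                    while (true) {
                        // ... periodic real-time work here ...
                        if (!waitForNextPeriod()) {
                            // waitForNextPeriod() returned false: a deadline miss.
                            break;
                        }
                    }
                }
            };
            rt.start();
        }
    }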

  • Program object schedule succeeds but is not executed in real time

    Hi all,
    I have a batch file ad following is the content
    set path=C:\WINDOWS\system32
    echo Copying started at %date% %time%>>_date_.txt
    XCOPY "\\Boxis\c$\Program Files\Business Objects\Tomcat55\webapps\dswsbobje\WEB-INF\classes\qaawsWsdl.zip" "\\Sxi31\d$\Program Files\Business Objects\Tomcat55\webapps\dswsbobje\WEB-INF\classes" /s /a /d
    echo Copying finished at %date% %time%>>_date_.txt
    echo Completed Successfully at %date% %time%>>_date_.txt
    echo ->>_date_.txt
    pause
    I have uploaded this batch file into Enterprise. When I schedule it, the status is shown as Success; however, the file qaawsWsdl.zip is not copied to the destination.
    Note: when executing the batch file directly (outside the scheduler), the file is copied.
    I have added the user in Replace a process level token policy
    I have set program object rights to run script/binaries
    SIA is running on domain account that has access to source as well as destination folders.
    Am I missing something?
    Environment
    Business Objects XI 3.1 SP3 clustered (Also tried above in standalone, same issue)
    Windows Server 2003

    The program object will run with the credentials of the user you provided in the schedule panel, unless you did not provide any and have set a default user account in the CMC under Application -> CMC -> Program object settings. This user has to have access to the source and target directories.
    What you can do is try to log in manually on the BOBJ server using the credentials of the Windows user that runs the program object, and try to run the script from the command line.
    The domain account under which the SIA is running is not really relevant.
    Regards,
    Stratos

  • Process Chain for Real Time Demon

    Please help, I am stuck. I followed the steps from SDN but one step is missing: how to create the process chain. I created the following:
    DSO connected to the DataSource via a transformation,
    Real-Time InfoPackage,
    Real-Time DTP
    assigned to the DataSource, and assigned the DS, IP and DTP to the daemon in RSRDA. I have also started it manually via "start all IPs". But how do I set up the process chains now?
    Please help me step by step with the process chain, since I am new to this daemon in process chains.
    Thanks
    Soniya

    Hi
    refer to this
    CREATION OF PROCESS CHAINS
    Process chains are used to automate the loading process.
    They are used in all applications, as you cannot schedule hundreds of InfoPackages manually every day.
    Metachain
    Steps for Metachain :
    1. Start (in this variant, set your schedule times for this metachain)
    2. Local Process Chain 1 (say it's a master data process chain - go into the start variant of this chain (a subchain, like any other chain) and check the second radio button "Start using metachain or API")
    3. Local Process Chain 2 (say it's a transaction data process chain - do the same as in step 2)
    Steps for Process Chains in BI 7.0 for a Cube.
    1. Start
    2. Execute Infopackage
    3. Delete Indexes for Cube
    4.Execute DTP
    5. Create Indexes for Cube
    For DSO
    1. Start
    2. Execute Infopackage
    3. Execute DTP
    4. Activate DSO
    For an IO
    1. Start
    2.Execute infopackage
    3.Execute DTP
    4.Attribute Change Run
    Data to Cube thru a DSO
    1. Start
    2. Execute Infopackage (loads up to PSA)
    3. Execute DTP (to load the DSO from PSA)
    4. Activate DSO
    5. Further Processing
    6. Delete Indexes for Cube
    7. Execute DTP (to load the Cube from the DSO)
    8. Create Indexes for Cube
    3.X
    Master data loading (Attributes, Texts, Hierarchies)
    Steps:
    1. Start
    2. Execute Infopackage (if you are loading 2 InfoObjects, say, just have them all in parallel)
    3. You might want to load in sequence - Attributes - Texts - Hierarchies
    4. And (connecting all Infopackages)
    5. Attribute Change Run (add all relevant InfoObjects).
    Start
      Infopackage 1A (Attr) | Infopackage 2A (Attr)
      Infopackage 1B (Txts) | Infopackage 2B (Txts)
      Infopackage 1C (Txts)
           |
      And Processor (connect Infopackage 1C & Infopackage 2B)
           |
      Attribute Change Run (add InfoObject 1 & InfoObject 2 to this variant)
    1. Start
    2. Delete Indexes for Cube
    3. Execute Infopackage
    4.Create Indexes for Cube
    For DSO
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    For an IO
    1.Start
    2.Execute infopackage
    3.Attribute Change Run
    Data to Cube through a DSO
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    4. Further Processing
    5. Delete Indexes for Cube
    6. Execute Infopackage
    7. Create Indexes for Cube

  • How to create a Real Time Interactive Business Intelligence Solution in SharePoint 2013

    Hi Experts,
    I was recently given the below requirements to architect/implement a business intelligence solution that deals with instant/real-time data modifications in the data sources. After going through many articles, e-books and expert blogs, I am still unable to piece together the right information to design an effective solution to my problem. Also, the client is ready to invest in the best infrastructure in order to achieve all the requirements below, yet the task seems like a sword of Damocles hanging over my neck whichever direction I go.
    Requirements
    1) Reports must be created against many-to-many table relationships and against multiple data sources(SP Lists, SQL Server Custom Databases, External Databases).
    2) The Report and Dashboard pages should refresh/reflect with real time data immediately as and when changes are made to the data sources.
    3) The Reports should be cross-browser compatible(must work in google chrome, safari, firefox and IE), cross-platform(MAC, Android, Linux, Windows) and cross-device compatible(Tabs, Laptops &
    Mobiles).
    4) Client is Branding/UI conscious and wants the reports to look animated and pixel perfect similar to what's possible to create today in Excel 2013.
    5) The reports must be interactive, parameterized, slice able, must load fast and have the ability to drill down or expand.
    6) Client wants to leverage the Web Content Management, Document Management, Workflow abilities & other features of SharePoint with key focus being on the reporting solution.
    7) Client wants the reports to be scalable, durable, secure and other standard needs.
    Is SharePoint 2013 Business Intelligence a good candidate? I see the below limitations with the Product to achieve all the above requirements.
    a) Cannot use Power Pivot with Excel deployed to SharePoint, as the minimum granularity of the refresh schedule is daily. This violates Requirement 1.
    b) Excel Services, PerformancePoint and Power View work in an in-memory representation mode. This violates Requirements 1 and 2.
    c) SSRS does not render the reports as stated above in Requirements 3 and 4. The report rendering on the page is very slow even for sample data. This violates Requirements 6 and 7.
    Has someone been able to achieve all of the above requirements using the SharePoint 2013 platform or any other platform? Please let me know the best possible solution. If possible, redirect me to whitepapers, articles and material that will help me design an effective solution. Eagerly looking forward to hearing from you experts!
    Please feel free to write in case you have any comments/clarifications.
    Thanks, 
    Bhargav

    Hi Experts,
    Request your valuable inputs and support on achieving the above requirements.
    Looking forward to your responses.
    Thanks,
    Bhargav

  • ABAP-HR real-time questions

    Hi friends,
    Kindly send me ABAP-HR real-time questions to my mail: [email protected]
    Thanks & Regards,
    babasish

    Hi
    Logical database
    A logical database is a special ABAP/4 program which combines the contents of certain database tables. Using logical databases facilitates the process of reading database tables.
    HR Logical Database is PNP
    Main Functions of the logical database PNP:
    Standard Selection screen
    Data Retrieval
    Authorization check 
    To use the logical database PNP in your program, specify it in your program attributes.
    Standard Selection Screen
    Date selection
    Date selection delimits the time period for which data is evaluated. GET PERNR retrieves all records of the relevant infotypes from the database.  When you enter a date selection period, the PROVIDE loop retrieves the infotype records whose validity period overlaps with at least one day of this period.
    Person selection
    Person selection is the 'true' selection of choosing a group of employees for whom the report is to run.
    Sorting Data
    · The standard sort sequence lists personnel numbers in ascending order.
    · SORT function allows you to sort the report data otherwise. All the sorting fields are from infotype 0001.
    Report Class
    · You can suppress input fields which are not used on the selection screen by assigning a report class to your program.
    · If SAP standard delivered report classes do not satisfy your requirements, you can create your own report class through the IMG.
    Data Retrieval from LDB
    1. Create data structures for infotypes.
        INFOTYPES: 0001, "ORG ASSIGNMENT
                            0002, "PERSONAL DATA
                            0008. "BASIC PAY
    2. Fill data structures with the infotype records.
        Start-of-selection.
             GET PERNR.
         End-of-selection.
        Read Master Data
    Infotype structures (after GET PERNR) are internal tables loaded with data.
    The infotype records (selected within the period) are processed sequentially by the PROVIDE - ENDPROVIDE loop.
              GET PERNR.
                 PROVIDE * FROM Pnnnn BETWEEN PN/BEGDA AND PN/ENDDA.
                        If Pnnnn-XXXX = ' '. write:/ Pnnnn-XXXX. endif.
                 ENDPROVIDE.
    Period-Related Data
    All infotype records are time stamped.
    IT0006 (Address infotype)
    01/01/1990   12/31/9999  present
              Which record to be read depends on the date selection period specified on the
              selection screen. PN/BEGDA PN/ENDDA.
    Current Data
    IT0006 Address  -  01/01/1990 12/31/9999   present
    RP-PROVIDE-FROM-LAST retrieves the record which is valid in the data selection period.
    For example, pn/begda = '19990930'    pn/endda = '99991231'
    IT0006 subtype 1 is resident address
    RP-PROVIDE-FROM-LAST P0006 1 PN/BEGDA PN/ENDDA.
    Process Infotypes
    RMAC Modules - An RMAC module, referred to as a macro, is a special construct of ABAP/4 code. Normally, the program code of these modules is stored in the table TRMAC. The table key combines the program code under a given name. Macros can also be defined in programs. An RMAC defined in TRMAC can be used in all reports. When an RMAC is changed, the report has to be regenerated manually to reflect the change.
    Reading Infotypes - by using RMAC (macro) RP-READ-INFOTYPE
              REPORT ZHR00001.
              INFOTYPES: 0002.
              PARAMETERS: PERNR LIKE P0002-PERNR.
              RP-READ-INFOTYPE PERNR 0002 P0002.
              PROVIDE * FROM P0002.
                  IF ... ENDIF.
              ENDPROVIDE.
    Changing Infotypes - by using RMAC (macro) RP-READ-INFOTYPE. 
    · Three steps are involved in changing infotypes:
    1. Select the infotype records to be changed;
    2. Make the required changes and store the records in an alternative table;
    3. Save this table to the database;
    The RP-UPDATE macro updates the database. The parameters of this macro are the OLD internal table containing the unchanged records and the NEW internal table containing the changed records. You cannot create or delete data. Only modification is possible.
    INFOTYPES: Pnnnn NAME OLD,
    Pnnnn NAME NEW.
    GET PERNR.
        PROVIDE * FROM OLD
               WHERE .... = ... "Change old record
               *Save old record in alternate table
               NEW = OLD.
        ENDPROVIDE.
        RP-UPDATE OLD NEW. "Update changed record
    Infotype with repeat structures
    · How to identify repeat structures.
    a. On infotype entry screen, data is entered in table form.
        IT0005, IT0008, IT0041, etc.
    b. In the infotype structure, fields are grouped by the same name followed by sequence number.
        P0005-UARnn P0005-UANnn P0005-UBEnn
        P0005-UENnn P0005-UABnn
    Repeat Structures
    · Data is entered on the infotype screen in table format but stored on the database in a linear  
      structure.
    · Each row of the table is stored in the same record on the database.
    · When evaluating a repeat structure, you must define the starting point, the increment and the
      work area which contains the complete field group definition.
    Repeat Structures Evaluation (I)
    · To evaluate the repeat structures
       a. Define work area.
           The work area is a field string. Its structure is identical to that of the field group.
       b. Use a DO LOOP to divide the repeat structure into segments and make it available for  
           processing in the work area, one field group (block) at a time.
    Repeat Structures Evaluation(II)
    Define work area
    DATA: BEGIN OF VACATION,
                  UAR LIKE P0005-UAR01, "Leave type
                  UAN LIKE P0005-UAN01, "Leave entitlement
                  UBE LIKE P0005-UBE01, "Start date
                  UEN LIKE P0005-UEN01, "End date
                  UAB LIKE P0005-UAB01, "Leave accounted
               END OF VACATION.
    GET PERNR.
         RP-PROVIDE-FROM-LAST P0005 SPACE PN/BEGDA PN/ENDDA.
         DO 6 TIMES VARYING VACATION
                 FROM P0005-UAR01 "Starting point
                     NEXT P0005-UAR02. "Increment
                  IF VACATION-UAR ... ENDIF. "process one field group
          ENDDO.
    Processing 'Time Data'.
    · Dependence of time data on validity period
    · Importing time data
    · Processing time data using internal tables
    Time Data and Validity Period
    · Time data always applies to a specific validity period.
    · The validity periods of different types of time data are not always the same as the date selection period specified in the selection screen.
    (Diagram: a leave record's validity period may only partially overlap the date selection period specified on the selection screen.)
    · PROVIDE in this case is therefore not used for time infotypes.
    Importing Time Data
    · GET PERNR reads all time infotypes from the lowest to the highest system date, not only those within the date selection period.
    · To prevent memory overload, add MODE N to the infotype declaration. This prevents the logical database from importing all data into infotype tables at GET PERNR.
    · Use macro RP-READ-ALL-TIME-ITY to fill infotype table.
    INFOTYPES: 2001 MODE N.
    GET PERNR.
        RP-READ-ALL-TIME-ITY PN/BEGDA PN/ENDDA.
        LOOP AT P2001.
             IF P2001-XYZ = ' '. A = B. ENDIF.
        ENDLOOP.
    Processing Time Data
    · Once data is imported into infotype tables, you can use an internal table to process the data you are interested in.
    DATA: BEGIN OF ITAB OCCURS 0,
                  BUKRS LIKE P0001-BUKRS, "COMPANY
                  WERKS LIKE P0001-WERKS, "PERSONNEL AREA
                  AWART LIKE P2001-AWART, "ABS./ATTEND. TYPE
                  ASWTG LIKE P2001-ASWTG, "ABS./ATTEND. DAYS
               END OF ITAB.
    GET PERNR.
    RP-PROVIDE-FROM-LAST P0001 SPACE PN/BEGDA PN/ENDDA.
    CLEAR ITAB.
    ITAB-BUKRS = P0001-BUKRS. ITAB-WERKS = P0001-WERKS.
    RP-READ-ALL-TIME-ITY PN/BEGDA PN/ENDDA.
    LOOP AT P2001.
          ITAB-AWART = P2001-AWART. ITAB-ASWTG = P2001-ASWTG.
          COLLECT ITAB. (OR: APPEND ITAB.)
    ENDLOOP.
    Database Tables in HR
    ·  Personnel Administration (PA) - master and time data infotype tables (transparent tables).
       PAnnnn: e.g. PA0001 for infotype 0001
    ·  Personnel Development (PD) - Org Unit, Job, Position, etc. (transparent tables).
       HRPnnnn: e.g. HRP1000 for infotype 1000
    ·  Time/Travel expense/Payroll/Applicant Tracking data/HR work areas/Documents (cluster tables).
       PCLn: e.g. PCL2 for time/payroll results.
    Cluster Table
    · Cluster tables combine the data from several tables with identical (or almost identical) keys
      into one physical record on the database.
    · Data is written to the database in compressed form.
    · Retrieval of data is very fast if the primary key is known.
    · Cluster tables are defined in the data dictionary as transparent tables.
    · External programs can NOT interpret the data in a cluster table.
    · Special language elements EXPORT TO DATABASE, IMPORT TO DATABASE and DELETE
      FROM DATABASE are used to process data in the cluster tables.
    PCL1 - Database for HR work area;
    PCL2 - Accounting Results (time, travel expense and payroll);
    PCL3 - Applicant tracking data;
    PCL4 - Documents, Payroll year-end Tax data
    Database Tables PCLn
    · PCLn database tables are divided into subareas known as data clusters.
    · Data Clusters are identified by a two-character code. e.g RU for US payroll result, B2 for
      time evaluation result...
    · Each HR subarea has its own cluster.
    · Each subarea has its own key.
    Database Table PCL1
    · The database table PCL1 contains the following data areas:
      B1 time events/PDC
      G1 group incentive wages
      L1 individual incentive wages
      PC personal calendar
      TE travel expenses/payroll results
      TS travel expenses/master data
      TX infotype texts
      ZI PDC interface -> cost account
    Database Table PCL2
    · The database table PCL2 contains the following data areas:
      B2 time accounting results
      CD cluster directory of the CD manager
      PS generated schemas
      PT texts for generated schemas
      RX payroll accounting results/international
      Rn payroll accounting results/country-specific ( n = HR country indicator )
      ZL personal work schedule
    Database Table PCL3
    · The database table PCL3 contains the following data areas:
      AP action log / time schedule
      TY texts for applicant data infotypes
    Data Management of PCLn
    · The ABAP commands IMPORT and EXPORT are used for management of read/write to
      database tables PCLn.
    · A unique key has to be used when reading data from or writing data to the PCLn.
      Field Name   KEY   Length   Text
      MANDT        X     3        Client
      RELID        X     2        Relation ID (RU, B2, ...)
      SRTFD        X     40       Work area key
      SRTF2        X     4        Sort key for duplicate keys
    Cluster Definition
    · The data definition of a work area for PCLn is specified in separate programs which comply  
       with fixed naming conventions.
    · They are defined as INCLUDE programs (RPCnxxy0). The following naming convention applies:
       n = 1 or 2 (PCL1 or PCL2)
       xx = Relation ID (e.g. RX)
       y = 0 for international clusters or country indicator (T500L) for different country cluster
    Exporting Data (I)
    · The EXPORT command causes one or more data objects to be written to cluster xy under the key xy-KEY.
    · The cluster definition is integrated with the INCLUDE statement.
    REPORT ZHREXPRT.
    TABLES: PCLn.
    INCLUDE: RPCnxxy0. "Cluster definition
    * Fill cluster key
    xy-KEY-field = ...
    * Fill data object
    * Export record
    EXPORT TABLE1 TO DATABASE PCLn(xy) ID xy-KEY.
       IF SY-SUBRC EQ 0.
           WRITE: / 'Update successful'.
       ENDIF.
    Exporting Data (II)
    · Export data using macro RP-EXP-Cn-xy.
    · When data records are exported using macro, they are not written to the database but to a  
      main memory buffer.
    · To save data, use the PREPARE_UPDATE routine with the USING parameter 'V'.
    REPORT ZHREXPRT.
    *Buffer definition
    INCLUDE RPPPXD00. INCLUDE RPPPXM00. "Buffer management
    DATA: BEGIN OF COMMON PART 'BUFFER'.
    INCLUDE RPPPXD10.
    DATA: END OF COMMON PART 'BUFFER'.
    RP-EXP-Cn-xy.
    IF SY-SUBRC EQ 0.
        PERFORM PREPARE_UPDATE USING 'V'.
    ENDIF.
    Importing Data (I)
    · The IMPORT command causes data objects with the specified key values to be read from
       PCLn.
    · If the import is successful, SY-SUBRC is 0; if not, it is 4.
    REPORT RPIMPORT.
    TABLES: PCLn.
    INCLUDE RPCnxxy0. "Cluster definition
    * Fill cluster key
    * Import record
    IMPORT TABLE1 FROM DATABASE PCLn(xy) ID xy-KEY.
       IF SY-SUBRC EQ 0.
    *      Display data object
       ENDIF.
    Importing data (II)
    · Import data using macro RP-IMP-Cn-xy.
    · Check the return code SY-SUBRC: 0 means success, 4 means an error.
    · You need to include the buffer management routines (RPPPXM00).
    REPORT RPIMPORT.
    *Buffer definition
    INCLUDE RPPPXD00.
    DATA: BEGIN OF COMMON PART 'BUFFER'.
    INCLUDE RPPPXD10.
    DATA: END OF COMMON PART 'BUFFER'.
    *import data to buffer
    RP-IMP-Cn-xy.
    *Buffer management routines
    INCLUDE RPPPXM00.
    Cluster Authorization
    · A simple EXPORT/IMPORT statement does not check for cluster authorization.
    · If you EXPORT/IMPORT via the buffer, the buffer management routines check for cluster
      authorization.
    Payroll Results (I)
    · Payroll results are stored in cluster Rn of PCL2 as field string and internal tables.
      n - country identifier.
    · Standard reports read the results from cluster Rn. Report RPCLSTRn lists all payroll results;
      report RPCEDTn0 lists the results on a payroll form.
    Payroll Results (II)
    · The cluster definition of payroll results is stored in two INCLUDE reports:
      include: rpc2rx09. "Definition Cluster Ru (I)
      include: rpc2ruu0. "Definition Cluster Ru (II)
    The first INCLUDE defines the country-independent part; The second INCLUDE defines the country-specific part (US).
    · The cluster key is stored in the field string RX-KEY.
    Payroll Results (III)
    · All the field string and internal tables stored in PCL2 are defined in the ABAP/4 dictionary. This
      allows you to use the same structures in different definitions and nonetheless maintain data
      consistency.
    · The structures for cluster definition comply with the name convention PCnnn. Unfortunately, 
       'nnn' can be any set of alphanumeric characters.
    *Key definition
    DATA: BEGIN OF RX-KEY.
         INCLUDE STRUCTURE PC200.
    DATA: END OF RX-KEY.
    *Payroll directory
    DATA: BEGIN OF RGDIR OCCURS 100.
         INCLUDE STRUCTURE PC261.
    DATA: END OF RGDIR.
    Payroll Cluster Directory
    · To read payroll results, you need two keys: PERNR and SEQNO.
    · You can get SEQNO by importing the cluster directory (CD) first.
    REPORT ZHRIMPRT.
    TABLES: PERNR, PCL1, PCL2.
    INCLUDE: rpc2cd09. "definition cluster CD
    PARAMETERS: PERSON LIKE PERNR-PERNR.
    RP-INIT-BUFFER.
    *Import cluster directory
       CD-KEY-PERNR = PERSON.
    RP-IMP-C2-CU.
       CHECK SY-SUBRC = 0.
    LOOP AT RGDIR.
       RX-KEY-PERNR = PERSON.
       UNPACK RGDIR-SEQNR TO RX-KEY-SEQNO.
    *  Import payroll data from PCL2
       RP-IMP-C2-RU.
    ENDLOOP.
    INCLUDE: RPPPXM00. "PCL1/PCL2 buffer handling
    Function Module (I)
      CD_EVALUATION_PERIODS
    · After importing the payroll directory, which record to read is up to the programmer.
    · Each payroll result has a status.
      'P' - previous result
      'A' - current (actual) result
      'O' - old result
    · Function module CD_EVALUATION_PERIODS will restore the payroll result status for a period
       when that payroll is initially run. It will also select all the relevant periods to be evaluated.
    Function Module (II)
    CD_EVALUATION_PERIODS
    call function 'CD_EVALUATION_PERIODS'
         exporting
              bonus_date = ref_periods-bondt
              inper_modif = pn-permo
              inper = ref_periods-inper
              pay_type = ref_periods-payty
              pay_ident = ref_periods-payid
         tables
              rgdir = rgdir
              evpdir = evp
              iabkrs = pnpabkrs
         exceptions
              no_record_found = 1.
    Authorization Check
       Authorization for Persons
    ·  In the authorization check for persons, the system determines whether the user has the 
       authorizations required for the organizational features of the employees selected with
       GET PERNR.
    ·  Employees for which the user has no authorization are skipped and appear in a list at the end
       of the report.
    ·  Authorization object: 'HR: Master data'
    Authorization for Data
    · In the authorization check for data, the system determines whether the user is authorized to
      read the infotypes specified in the report.
    · If the authorization for a particular infotype is missing, the evaluation is terminated and an error
      message is displayed.
    Deactivating the Authorization Check
    · In certain reports, it may be useful to deactivate the authorization check in order to improve
      performance. (e.g. when running payroll)
    · You can store this information in the object 'HR: Reporting'.
    These are the main areas they ask questions on.

  • Best practice for real-time requirements

    All,
    I am trying to find out what the best practice is for reporting against real-time data. Is it:
    1 - WebI against a universe/BEx query on top of hybrid cubes?
    2 - Crystal Reports directly against the ECC data?
    3 - using another solution such as Data Federator, or something different?
    I am looking to know if anyone has had such a requirement, and to have them share their experience.
    Did they hit some huge challenges with hybrid cubes?
    Thanks in advance for your help
    Philippe

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, I want it to load my latest data.
    With Live Office, I can either schedule a Crystal Report and get the data delayed, or use the option from Live Office to make it refresh right now. Is that a correct assumption?
    I was talking about BW, just in case they are willing to change the requirement from real time to every 5 minutes.
    Just so you know, we are also thinking of the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build some interactive reports on top of these virtual providers within CRM
    3 - get the link to this report (it is one of the report features within CRM)
    4 - design and build your dashboard on top of it
    5 - export your SWF file to the CRM Web UI
    We are trying to see which one is the best.
    Philippe

  • Real-time job doesn't automatically receive changes in SAP ECC 6

    Hi Experts,
    I'm trying to (using Data Integrator) automatically retrieve data from SAP ECC and store it in a local database table. I followed the steps written in this article: http://wiki.sdn.sap.com/wiki/display/BOBJ/Receiving+IDOCs. I'm modifying some records in the Cost Center Master Data (CSKS) and using COSMAS01 as the IDOC in which I'm trying to send the information.
    All seems to be OK: I manually sent a Cost Center data row using the /nbd16 transaction, and SAP ECC displayed a message saying that an IDOC had been generated. Also, when I checked the IDOC status in BD87 it had status 03, meaning it had been sent correctly.
    In Data Services, when I clicked on 'View Data' in the local database table that I put in the dataflow, I could see the rows that I had manually sent from ECC. However, after truncating the local table, it bothered me that when I tried to manually send a second IDOC with another row from the Cost Center Master Data table, my real-time job didn't receive the request and consequently didn't insert the desired rows. I didn't change anything in the configuration, so my first question is: do you know what could cause my real-time job to stop receiving requests from ECC? I tried making a third attempt, but at my company they had to shut down the ECC server, so it's going to be a while before I can try again.
    Now, the REAL question of this post: after I sent the first IDOC successfully (and before manually sending the second one mentioned earlier), I changed a record in the CSKS table directly in ECC, hoping that it would automatically generate a COSMAS01 IDOC and send all the data in the table to Data Integrator. This didn't happen and no IDOC was generated, so do any of you know why the automatic change didn't trigger the sending of the IDOC with the data?
    As I said before, I made the configuration in ECC and DI following the steps in the link at the start of the post, and it was tested OK by successfully sending a COSMAS01 IDOC once, manually using bd16.
    In advance I thank you all for your cooperation. This is my first thread in the SDN forum, so please excuse any mistakes in my English.
    Best regards...

    You do not need to schedule this job every 10 minutes. Why?
    If we can't schedule this job every 10 minutes, how can I retrieve the delta records into the queue (RSA7)?
    What is your advice, I mean, how should it be scheduled in order to get the delta?
    Thanks

  • DPC spike at Windows automatic maintenance startup. Cannot leave the computer alone while processing real-time streams.

    This is what happens when I leave the computer idle for a while and the Windows automatic maintenance starts:
    Driver file    Description          ISR count   DPC count   Highest execution (ms)   Total execution (ms)
    ntoskrnl.exe   NT Kernel & System   0           50764       0,235854                 332,950426
    A DPC spike is generated by ntoskrnl.exe, causing dropouts in real-time streams.
    JTS

    Hi Fjtorsol,
    We hope your issue has been resolved. If you've found a solution by yourself, you could share it with us and we will mark it as the answer.
    High Deferred Procedure Call (DPC) latencies are usually caused by certain drivers. If it is caused by automatic maintenance, please re-check your scheduled tasks that are marked as "run when the computer is idle". As suggested by MVP ZigZag, please use the Microsoft Windows Performance Analyzer from the Windows Assessment and Deployment Kit (ADK) to identify the cause of any DPC latency spikes.
    https://www.microsoft.com/en-gb/download/details.aspx?id=39982
    A DPC CPU Usage Summary Table will open containing a list of drivers/programs. This list is already correctly sorted (by the Actual Duration column), so the process at the very top of the list is the likely cause of your problem.
    Regards
    D. Wu
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Real Time Communication

    Hi Gurus,
    Can anyone tell me what REAL TIME COMMUNICATION is?
    I also request information/details on RTCIS.
    Regards
    Ajoy

    Hi Ajoy,
    1.1 Application Scenarios for Real Time Communications
    There are two classes of applications: client and server. Client class applications have one real-time client per computer, such as the traditional instant messenger (IM) application. Server class applications typically act on behalf of multiple users or communicate with many hundreds of users simultaneously. Server class applications are often based around intelligent applications that interact with users. These applications can be divided into two categories: notification Apps that send information to a client and interactive Apps that accept and respond to a client. A third type of server class applications, Web-based clients, interacts with users through a Web server.
    1.2 Notification Applications
    Notification Apps are real-time applications that send information to multiple clients from a centralized server (see Figure 1). The one-way transmission means that clients cannot communicate directly with the notification App. Instead, clients must choose which events they wish to receive by using some other technique, such as a Web application.
    One example of a notification App is an application that notifies all of the users of a particular e-mail server that the server is about to go offline. Another useful notification App would send alerts in cases of severe weather.
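    As a rough illustration of the notification pattern just described - not of the RTC Client API itself, which is a set of COM interfaces - the hypothetical Java sketch below shows the essential one-way shape: clients opt in to a topic through a separate channel, and the server pushes messages to every subscriber with no reply path.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Toy one-way notification server: clients opt in to a topic out of band and
    // from then on only receive pushes; there is deliberately no reply path.
    public class NotificationAppSketch {

        private final Map<String, List<Consumer<String>>> subscribers =
            new ConcurrentHashMap<>();

        // Subscription happens through some other technique (e.g. a Web app).
        public void subscribe(String topic, Consumer<String> client) {
            subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>())
                       .add(client);
        }

        // One-way push from the centralized server to every subscribed client.
        public void publish(String topic, String message) {
            subscribers.getOrDefault(topic, List.of())
                       .forEach(client -> client.accept(message));
        }

        public static void main(String[] args) {
            NotificationAppSketch server = new NotificationAppSketch();
            server.subscribe("severe-weather", msg -> System.out.println("client A got: " + msg));
            server.subscribe("severe-weather", msg -> System.out.println("client B got: " + msg));
            server.publish("severe-weather", "Severe weather warning until 18:00");
        }
    }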
    1.3 Interactive Apps
    Interactive Apps are applications that allow multiple clients to communicate with a central server in real-time, as Figure 2 shows. They are different from notification bots in that interactive bots support two-way communications with a client. Using this approach, you can build an application that interacts with users in real-time. Within this scenario, there are two main sub-scenarios. The first provides a user with information and waits for the user to respond, such as an application that notifies users about changes in stock prices and then gives the user the option to buy or sell. The second waits for the user to request a session with the App and then responds to requests that the user supplies, such as a calendar application that allows a user to schedule meetings and other events while receiving reminders just prior to the meeting or event.
    1.4 Web-Based Clients
    Web-based clients provide the same basic functionality as the traditional IM client through a Web interface, thus allowing the widest possible audience to use the application, as Figure 3 illustrates. It also has the side effect of eliminating the need for a user to download local software, which reduces user concerns about the download containing a potential virus. These types of clients are useful to organizations that wish to provide a Web-based front end to their internal IM system. For example, a company might wish to use a Web-based IM client to connect customers with a support group. Doing so maximizes the number of customers that can connect with the support group.
    Real Time Communications Data Flow
    It's critical that organizations planning large deployments of real-time communications applications ensure that those applications can scale to meet the desired goals. The RTC Client API is very efficient for client class applications for which each client runs on its own computer. To build a scalable RTC Client API application that services multiple clients with a single computer, you need to ensure that the application is scalable when you design it.
    The Real-time Communications (RTC) Client API is a set of COM interfaces and methods designed to create PC-PC, PC-phone, phone-phone audio/video calls, or text-only Instant Messaging (IM) sessions over the Internet. Application sharing and whiteboarding (an application that displays a window for two users to exchange information) can also be added to PC-PC sessions. Presence information is used to track the location of buddies (or contacts) for communication purposes. This information is available through the RTC Client API on a SIP registrar server.
    The RTC Client API:
    1.Supports multiparty phone-phone calls
    2.Uses SIP-based signaling and presence communications
    3.Integrates with the Microsoft Office RTC proxy and registrar server
    4.Supports provisioning with ITSPs or third-party corporate-deployed servers
    5.Integrates signals over IP and PSTN networks
    I hope this will be useful for you.
    Thanks,
    Swamy Kunche
