Hi experts, a real-time environment question

Please guide me:
1. In real time, how will I get my object?
2. After completing my object, how can I test it? Which method should I use, and who will send the test data?
3. Who can release my object? (Is it the ABAPer's role to release it using SE09, SE10 or some other transaction?)
4. To whom do I submit my object (the TL or someone else)?
5. What does the TL do, and how does he test my object?
6. To whom does the TL submit the object: the PL, the PM or the Basis guy?
7. Who will transport the request?
Could you please send answers?
Thanks in advance

Hi Sayeed,
Please find below the answers to your questions:
1. In real time, how will I get my object?
<u>Ans:</u> The objects in a project are allocated to the developers by the technical lead.
2. After completing my object, how can I test it? Which method should I use, and who will send the test data?
<u>Ans:</u> In typical SAP developments, as a developer you will do the unit testing, with the data provided in the test scripts.
3. Who can release my object? (Is it the ABAPer's role to release it using SE09, SE10 or some other transaction?)
<u>Ans:</u> Generally, the developer has a transport request for the objects he is assigned, and he releases the request from SE09 after the unit testing and Code Inspector checks.
4. To whom do I submit my object (the TL or someone else)?
<u>Ans:</u> Once you are done with your object, communicate this to your TL (in fact, he will be monitoring your work/tasks).
5. What does the TL do, and how does he test my object?
<u>Ans:</u> The TL may assign the object to a QA resource for a code review, etc., or he might do it himself. He will also make sure the object works correctly by testing it from his side too.
6. To whom does the TL submit the object: the PL, the PM or the Basis guy?
<u>Ans:</u> The TL delivers the objects to the project manager or program manager, as the case may be.
7. Who will transport the request?
<u>Ans:</u> The Basis resources are responsible for moving the transports across the landscape.
Hope this helps,
Sajan Joseph.

Similar Messages

  • Hi experts, real-time questions, please post answers

    I am confused between support and development; kindly guide me.
    Support doubts:
    1. In support, where do we find the servers (DEV, QA, PRD): at the client's place or in our own company?
    2. If they are at the client's place, how do we connect to the client? Is there any tool to connect with the client?
    3. Does the client send tickets to the developer, the PM or the TL? (Will we get tickets through the PM or the TL?)
    4. As a developer, what types of tickets can we solve, and at minimum how many tickets can we solve per month?
    Development doubts:
    1. In implementation/development, where do we find the servers?
    2. In the company, which tool is used to communicate with the developer, the TL and the PM?
    3. For example, if the servers are at the client's place, does our Basis team provide the user ID and password, or someone else?
    Thanks in advance

  • Java concurrent in real time environment

    Hi David,
    we intend to use classes from java.util.concurrent in our implementation,
    for example:
    BlockingQueue, HashMap, etc.
    I read in some documentation from 2006 that Java RTS does not work well
    with these packages.
    Is that still the case now?
    Second question (it is more a request than a question):
    when we read the Java RTS version we get 1.0.2, even though our Java RTS version is 2.1.
    I guess you just need to update the version string.
    Thanks
    Gabi

    Hi Lior,
    Priority inversion is about anything that can delay a high priority thread waiting for a low priority one which itself is prevented from running because middle priority threads are running. This is not only about the "synchronize" statement or "lock". This is about any scheme that allows threads to cooperate.
    When you use "synchronize", an RTSJ implementation will use PIP to ensure that the low priority thread is boosted to the high priority. This basically ensures that the delay for the high priority thread is minimized. Thus, using "synchronize" is a way to avoid priority inversion in your application, not what causes the priority inversion.
    However, when you use the locks defined in j.u.c (like java.util.concurrent.locks.ReentrantLock), the high priority thread can be blocked on that lock and will not boost the low priority thread. If there are several threads trying to acquire the lock, it is not even guaranteed that the highest priority contender will get it when it is released. Other policies, like FIFO, can be implemented. That might in some cases be the right policy for your application... but you must be aware of the policy if this is in the time-critical part of your application.
    Similarly, suppose you implemented the management of a resource with a java.util.concurrent.Semaphore. The low priority thread could own a resource and is expected to release() it when it no longer needs the resource. If the high priority thread needs the same resource, it has no way to boost the low priority thread to speed up the process. In the worst case, the low priority thread could be completely preempted by other real-time threads and may never use and release the resource. This applies to all the synchronizers in j.u.c (CyclicBarrier, Exchanger, ...). This is not necessarily an issue. This may be the behavior you want.
    Last example, suppose you want to use a ThreadPoolExecutor to automatically parallelize some work. You must be aware that this does not take into account the priority of the thread that needs that job to be executed. Once again, this is OK if this work is not in the time-critical part of your application and if the thread that is waiting for the job does not own a resource that could be necessary for a higher priority thread.
    As a summary, if you use j.u.c, you must take into account the fact that there will be no boosting and no guarantee based on the priority. If you are sure that priorities do not matter, then you can use j.u.c and benefit from these extended APIs.
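    To make the lack of priority awareness concrete, here is a small sketch (plain Java SE, not RTSJ-specific; the class name and thread setup are invented for illustration) showing that a fair ReentrantLock hands the lock to the longest-waiting thread, regardless of thread priority:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // fair = FIFO hand-off, not priority-based
        List<String> order = new CopyOnWriteArrayList<>();

        lock.lock(); // hold the lock so both contenders must queue behind us
        Thread low = new Thread(() -> { lock.lock(); order.add("low"); lock.unlock(); });
        low.setPriority(Thread.MIN_PRIORITY);
        Thread high = new Thread(() -> { lock.lock(); order.add("high"); lock.unlock(); });
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        while (!lock.hasQueuedThread(low)) Thread.sleep(1);  // low queues first
        high.start();
        while (!lock.hasQueuedThread(high)) Thread.sleep(1); // high queues second

        lock.unlock(); // the fair lock goes to the longest-waiting thread: "low"
        low.join();
        high.join();
        System.out.println(order); // [low, high]
    }
}
```

    The low-priority thread wins simply because it queued first: exactly the FIFO-style policy described above, with no boosting of the lock owner.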
    Bertrand.
    Edited by: delsart on Feb 18, 2009 5:11 PM

  • Hi experts doubt real time environment

    Hi ABAPers, could you please guide me?
    Could you please give me correct examples for the following:
    user ID:
    system ID:
    request number:
    development client no.:
    quality server no.:
    production no.:

    Hi Sayeed,
    User ID: This is an ID created by the system administrator. Generally they follow some standard for this; my company uses last name + first character of the first name, so I become RAJANN.
    System ID: This is the system ID, again assigned by the Basis team (system people). Generally it denotes the system's number in the whole landscape if you have several instances of the same kind. Suppose you have three systems of the same kind; you can number them 01, 02, 03, etc.
    Request number: This is a request number assigned for making any change in the system. It is a number with which you can track the changes made in the system, whether developments or configuration changes. These numbers are transported from one system to another.
    SAP recommends at least a three-system landscape:
    DEV - Development system, where all changes are made and where you create transport requests for development and configuration.
    QAC - Quality assurance system; transports are moved into the QAC system, where you test the objects and do quality checks.
    PRD - Production system, where all the transactions and other activities of the enterprise happen.
    So a transport request moves changes from DEV -> QAC -> PRD using the request number.
    Quality server number: This is like an IP address, a number assigned to your system; it is the address of your system in the entire network landscape. For understanding purposes, you can think of the system as a simple computer: all computers have an IP address, and the quality server number refers to exactly that.
    Production system: It is exactly as explained above.
    Regards, Nishant
    Please reward points if this helps.

  • What is the best way to store data on a network hard drive using the CompactRIO RTOS and LabVIEW Real-Time?

    HI!
    I'm starting a project in which I have to read a low-rate stream of data in a real-time environment. I should store these data on a network hard disk without any PC running a standard OS; I just have the CompactRIO RTOS. How can I send this data to the network drive? Is it possible to just “write” the data as I would to a standard file in LabVIEW?
    Thanks for any help!!
    Il Conte
    dr. Valentino Tontodonato

    Il Conte,
    you have to keep in mind that normally the RT OS does not map drives other than the CompactFlash it has onboard (C:\). There are exceptions, such as:
    - cFP-20xx controllers, which may have additional flash drives addressed as the D:\ drive
    - CVS systems with an IEEE-1394 interface, which can write/read to external FireWire hard drives
    - PXI controllers booted from a floppy disk, which may map the floppy drive as A:\
    One solution to your needs may be to write data to files locally on your onboard CompactFlash and then transfer these files to a network location using FTP, provided the network drive you are pointing to supports FTP.
    Let us know if you need any more help with this,
    AlessioD
    National Instruments

  • Real time questions for HFM

    Hi, Gurus,
    Can anyone please answer the questions below?
    1. How do we restart the services in real time?
    Example: if the Workspace service needs to be restarted, what procedure do we need to follow in this situation?
    2. How do we overcome system performance issues, especially at month end, when a number of concurrent entities are loading data into the HFM application at the same time?
    Do we need to follow a protocol, such as a few entities loading data one day, a few more entities the next day, and so on?
    3. How frequently do we need to take backups of the data and applications: daily, weekly or monthly?
    Example: if it is on a daily basis, how do we stop the entities from loading data into the HFM application while backups are being taken at the scheduled times? Does it affect data accuracy?
    4. What is the best approach to move the application from the DEV to the production environment when a few accounts or entities were added recently?
    That is, the CopyApplication utility, migration, or extracting the metadata and loading it into the production application?
    5. What are the common steps taken in real time to inform all the entities to log out of the HFM application for maintenance or any other purpose?
    Thanks for your help in advance
    Regards
    Smilee

    Hi,
    In a real-time environment, the following process is generally followed:
    1. Most organizations prepare the financial close cycle timelines for the entire fiscal year in advance. This activity is taken care of by the finance department.
    2. Get approval from the financial director for the close cycle timeline and freeze it unless an exception comes in.
    3. Based on the timelines, the IS department prepares the maintenance schedule of the Hyperion EPM instance, keeping in mind that there should be no maintenance activity during the close cycle.
    4. The maintenance schedule is approved by the IS department and then taken to the finance department for final approval. Any changes/recommendations suggested by the finance department are incorporated by the IS department before the final approval.
    5. Notify the HFM end users about the maintenance schedule well in advance by e-mail communication, the team site, business meetings, etc.
    6. Send a reminder mail about the scheduled maintenance activity to all the HFM end users one day before the activity starts.
    7. Send a final reminder mail about the scheduled maintenance activity one hour before the activity starts.
    8. Once the maintenance activity is completed, send an e-mail communication to all the HFM end users that the HFM system is up and running again.
    9. In case any unexpected issue during the maintenance activity leads to an increase in the required downtime, inform the key stakeholders at the earliest and inform the end users accordingly.
    Thanks.

  • Real Time Third Party Software Integration

    I posted this in the PI forum and it was suggested to post it here as well. I am new to SAP but have worked with PeopleSoft and MS Dynamics AX for several years. My company is implementing FI/CO, and our partner is telling us that we will interface our third-party application via scheduled flat files. We are used to running in a real-time environment, so this is a little hard to swallow.
    Here is an example of what I am trying to do:
    Our third party software will be the starting point for customers. When a new customer is created we need to create that customer in SAP as well (for use with AR). Currently we open a connection to the existing financial application, verify that the customer does not exist and insert the data into the table.
    We all agree that we do not want to write directly to the SAP table(s).
    What we would like to do is when the user saves the record in the third party app, open a connection to SAP, pass the data to the BAPI, wait for a return code then complete the transaction.
    The third party application is written in PowerBuilder 11 and is able to connect to most any database, talk .Net, and call external API's.
    Please advise if this is possible and if so, a link to an example would be great.
    Thanks,
    Scott

    You may use the RFC protocol to connect to the SAP system. You can download the RFC library from the SAP download center and install it on your third-party system.
    Check the related threads.
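    On the Java side, such a synchronous call is usually made through SAP JCo (Java Connector) or the NetWeaver RFC SDK, invoking a customer-creation BAPI and then BAPI_TRANSACTION_COMMIT. As a library-agnostic sketch of the save-hook pattern described above (verify the customer does not exist, create it, wait for the return code), where `CustomerBapi` and all names are purely hypothetical stand-ins:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the SAP side; a real implementation would wrap
// a JCo destination/function call to the customer-creation BAPI followed
// by BAPI_TRANSACTION_COMMIT.
interface CustomerBapi {
    boolean exists(String customerId);
    String create(String customerId, String name); // returns a BAPI-style return code ("" = success)
}

public class CustomerSync {
    // Synchronous save hook: verify, create, wait for the return code.
    static String onSave(CustomerBapi sap, String id, String name) {
        if (sap.exists(id)) return "EXISTS";        // duplicate guard
        String rc = sap.create(id, name);           // blocks until SAP answers
        return rc.isEmpty() ? "OK" : "ERROR:" + rc; // empty return code = success
    }

    public static void main(String[] args) {
        Map<String, String> table = new ConcurrentHashMap<>(); // in-memory stand-in for SAP
        CustomerBapi stub = new CustomerBapi() {
            public boolean exists(String id) { return table.containsKey(id); }
            public String create(String id, String name) { table.put(id, name); return ""; }
        };
        System.out.println(onSave(stub, "C100", "Acme")); // OK
        System.out.println(onSave(stub, "C100", "Acme")); // EXISTS
    }
}
```

    The stub exists only so the pattern is runnable; the point is that the third-party save completes only after a return code comes back from SAP, which is exactly the real-time behavior Scott asked for.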

  • Pipe in real time

    Hi David,
    we want to know:
    is there any problem with using pipes (PipedInputStream, PipedOutputStream, PipedReader, PipedWriter)
    in a real-time environment?
    Gabi

    None of the standard Java libraries were designed for real-time use. How effective using Pipes will be depends on how you want to use them. Internally these classes use synchronized methods (good) plus wait/notify (bad). The use of wait/notify has two problems:
    a) in general if the waiter and the notifier are different priorities then you can get a priority inversion. A high priority thread could be waiting for a low-priority thread to pass some data, but a medium priority thread prevents the low-priority thread from running.
    b) In particular, this code always does the wait with a 1-second timeout, so if the communication rates are low, then threads waiting for data will wake up once a second, check for more data, and go back to waiting. Similarly, if a producer outpaces a consumer, then the producer blocks waiting for space to write, but wakes up once a second to see if there is space. This will introduce jitter and related non-determinism.
    You can deal with (a) by using a single thread at each end of the pipe with the same priorities. But you can't do anything about (b).
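    A minimal sketch of point (a) in plain Java SE: a single producer and a single consumer at the same priority on either end of the pipe (the class name and payload are invented). The one-second internal poll of point (b) is still there under the hood; only the inversion risk is addressed:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class SamePriorityPipe {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connected pair

        Thread producer = new Thread(() -> {
            try {
                out.write("sample payload".getBytes(StandardCharsets.UTF_8));
                out.close(); // closing signals end-of-stream to the reader
            } catch (IOException e) { throw new RuntimeException(e); }
        });

        StringBuilder received = new StringBuilder();
        Thread consumer = new Thread(() -> {
            try {
                int b;
                while ((b = in.read()) != -1) received.append((char) b);
            } catch (IOException e) { throw new RuntimeException(e); }
        });

        // point (a): same priority at both ends of the pipe, so neither end
        // waits on a lower-priority peer that could suffer inversion
        producer.setPriority(Thread.NORM_PRIORITY);
        consumer.setPriority(Thread.NORM_PRIORITY);

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        System.out.println(received); // sample payload
    }
}
```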
    David Holmes

  • Event Structure in Timed Loop in real time cannot work

    I am a new user of LabVIEW, and I have met a problem which really frustrates me!!! I hope someone can help me out. Thanks in advance!
    I simply want to use an event structure inside a timed loop, which is extremely important in my design.
    However, this works very well on my computer (without connecting to the FPGA).
    Once I connect it to the FPGA, I can still run it, but there is no response!
    My file is attached. Please, somebody help me!
    Looking forward to your answers!
    Attachments:
    Test for Timed Loop.vi ‏9 KB

    The FPGA runs headless, so event structures are not going to work. What you need to do is have an application on your host where the user presses a button, changes a value, etc. That event should send a message via TCP/IP to the code running in the real-time environment. Then the real-time code should set a control on the FPGA to the value you want.
    In general, real-time programming with FPGA has multiple layers:
    1) Host code -> handles user interactions and communicates them to the real-time code via TCP, UDP, etc. Displays data to the user sent from the RT controller.
    2) Real-time code -> runs headlessly. Handles messages from the host code, processes FPGA data, and communicates with the FPGA much like the host code communicates with the real-time code.
    3) FPGA -> does the acquisition and passes it via a FIFO to the RT code.
    The first thing you need to do is understand the architecture and how all these pieces of the puzzle work together before throwing things down on a diagram.
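    The host-to-RT messaging layer (layer 1 to layer 2) can be sketched in a few lines. This is a plain Java loopback illustration of the pattern only, not LabVIEW code, and the message format ("SET start=1") is invented for the example:

```java
import java.io.*;
import java.net.*;

public class HostToRt {
    public static void main(String[] args) throws Exception {
        // RT-side listener (stand-in for the message loop on the controller)
        try (ServerSocket rt = new ServerSocket(0)) { // ephemeral port
            Thread rtLoop = new Thread(() -> {
                try (Socket s = rt.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String msg = in.readLine(); // e.g. "SET start=1"
                    // here the RT code would write the value to an FPGA control
                    out.println("ACK " + msg);
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });
            rtLoop.start();

            // Host side: a user event becomes a message to the RT target
            try (Socket host = new Socket("localhost", rt.getLocalPort());
                 PrintWriter out = new PrintWriter(host.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(host.getInputStream()))) {
                out.println("SET start=1");
                System.out.println(in.readLine()); // ACK SET start=1
            }
            rtLoop.join();
        }
    }
}
```

    In LabVIEW the same pattern is built with the TCP Open/Read/Write VIs (or network streams) on the host and RT sides, with the RT VI then writing the received value to an FPGA front-panel control via the FPGA Interface.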
    CLA, LabVIEW Versions 2010-2013

  • What are the main parts of ABAP programming needed to work in real time?

    Hi
    I would like to know what are the main/important parts of ABAP programming for working in a real-time environment.
    Moderator message : Search for available information. Thread locked.
    Edited by: Vinod Kumar on Aug 1, 2011 9:50 AM

    Hi Ashok,
    There are many programming areas, such as function modules, report programs, workflows, Smart Forms, Web Dynpro, Adobe Forms, SAPscript, etc.
    In which context do you want the answer? Can you please tell?
    Regards,
    Aabha

  • The Problem about Monitoring Motion using PCI-7358 on LabVIEW Real Time Module

    Hello everyone,
    I have a problem with monitoring the position of an axis. First let me give some details about the motion controller system I’m using.
    I’m using a PCI-7358 as the controller and an MID-7654 as the servo driver, and I’m controlling a Maxon brushed DC motor. I want to check the dynamic performance of the actuator system in a real-time environment, so I created a LabVIEW function and implemented it on the LabVIEW Real-Time module.
    My function loads a target position (Load Target Position.vi) and starts the motion (Start.vi); then, in a timed loop, I read the instantaneous position using Read Position.vi. When I graphed the data taken from Read Position.vi, I saw that the same values were read for 5 sequential loops. I checked the total time required by Read Position.vi to complete its task, and it is 0.1 ms. I arranged the loop that acquires the data to complete one iteration in 1 ms. So why does the data show that 5 sequential loops read the same value?

    Read Position.flx can execute much faster than 5 ms, but as it reads a register that is updated only every 5 ms on the board, it reads the same value multiple times.
    To get around this problem there are two methods:
    1. Buffered high-speed capture (HSC)
    With buffered HSC, the board stores a position value in its onboard buffer each time a trigger occurs on the axis' trigger input. HSC allows a trigger rate of about 2 kHz; that means you can store a position value every 500 µs. Please refer to the HSC examples. You may have to look into the buffered breakpoint examples to learn how to use a buffer, as there doesn't seem to be a buffered HSC example available. Please note that you need an external trigger signal (e.g. from a counter of a DAQ board), and that the amount of position data you can acquire in a single shot is limited to about 16,000 values.
    2. Buffered position measurement with an additional plug-in board
    If you don't have a device that can generate the repetitive trigger signal required for method 1, you will have to use an additional board, e.g. a PCI-6601. This board provides four counter/timers. You could either use it to generate the trigger signal or use it to do the position capture itself. A PCI-6601 (or an M-Series board) can run a buffered position acquisition at a rate of several hundred kHz, with virtually no limitation on the amount of data to be stored. You could even route the encoder signals from your 7350 to the PCI-6601 using an internal RTSI cable (no external wiring required).
    I hope this helps,
    Jochen Klier
    National Instruments

  • Hi Experts! Clarification regarding the phases of a project in real time

    Hi,
    Can anybody please explain the phases of a project and their details, i.e. what is done at each stage in real time, since I am very new to these phases?
    Please do not send me any links for reference; rather, please describe it in detail.
    Regards,
    Eshwant....

    Hi,
    Implementation process:
    Project preparation
    The project preparation phase, depicted below, focuses on two main activities: setting up the TSO and defining a solution vision. These activities put an organization on the right track towards implementation.
    Design and initially staff the SAP TSO
    The first major step of the project preparation phase is to design and initially staff an SAP technical support organization (TSO), which is the organization charged with addressing, designing, implementing and supporting the SAP solution. This can include programmers, project management, database administrators, test teams, etc. At this point, the focus should be on staffing the key positions of the TSO, e.g. the high-level project team and SAP professionals like the senior database administrator and the solution architect. Next to that, this is the time to decide between internal staff members and external consultants.
    The image at the right shows a typical TSO chart.
    Craft solution vision
    The second project preparation job is to define a so-called solution vision, i.e. a vision of the future-state of the SAP solution, where it is important to address both business and financial requirements (budgets). The main focus within the vision should be on the company’s core business and how the SAP solution will better enable that core business to be successful. Next to that, the shortcomings of the current systems should be described and short but clear requirements should be provided regarding availability (uptime), security, manageability and scalability of the SAP system.
    Sizing and blueprinting
    The next phase is often referred to as the sizing and blueprinting phase and forms the main chunk of the implementation process.
    Perform cost of ownership analysis
    Figure 5: Solution stack delta analysis
    This phase starts with performing a total cost of ownership (TCO) analysis to determine how to get the best business solution at the lowest cost. This means comparing SAP solution stack options and alternatives and then determining what costs each part of the stack will bring and when these costs will be incurred. Parts of the stack are, for example, the hardware, operating system and database, which form the acquisition costs. Next to that, recurring costs like maintenance costs and downtime costs should also be considered. Instead of performing a complete TCO analysis for the various solution stack alternatives one would like to compare, it can be wise just to do a so-called delta analysis, where only the differences between the solutions (stacks) are identified and analyzed. The image at the right depicts the essence of a delta analysis.
    Identify high availability and disaster recovery requirements
    The next step is identifying the high availability requirements and the more serious disaster recovery requirements. This is to plan what to do with later downtime of the SAP system, caused by e.g. hardware failures, application failures or power outages. It should be noted that it is very important to calculate the cost of downtime, so that an organization has a good idea of its actual availability requirements.
    Engage SAP solution stack vendors
    Figure 6: Simplified SAP solution stack
    A true sizing process is to engage the SAP solution stack vendors, which is the next step. This means selecting the best SAP hardware and software technology partners for all layers and components of the solution stack, based on a side-by-side sizing comparison. The most important factors of influence here are the estimated numbers of (concurrent) users and batch sizes. A wise thing to do is to involve SAP AG itself and let it create a sizing proposal stating the advised solution stack, before moving to SAP's technology partners/SAP vendors, like HP, Sun Microsystems and IBM. A simplified solution stack is depicted at the right, showing the many layers for which software and hardware have to be acquired. Note the overlap with the OSI model.
    Staff TSO
    The TSO is the most important resource for an organization that is implementing SAP, so staffing the TSO is a vital job which can consume a lot of time. In a previous phase, the organization should already have staffed the most vital positions. At this point the organization should staff the bulk of the TSO, i.e. fill the positions that directly support the near-term objectives of the implementation, which are to develop and begin the installation/implementation of the SAP data center. Examples are: data center experts, network infrastructure experts, security specialists and database administration experts.
    There are many ways to find the right people within or outside the organization for all of the TSO positions and it depends on the organization how much time it wants to spend on staffing.
    Training
    One of the most vital stages of the implementation process is training. Very few people within an organization are SAP experts or even have worked with SAP software. It is therefore very important to train the end users but especially the SAP TSO: the people who design and implement the solution. Many people within the TSO need all kinds of training. Some examples of these positions:
    SAP Network Specialists
    SAP Database Administrators
    SAP Security specialists
    Documentation specialists
    Et cetera
    All of these people need to acquire the required SAP knowledge and skills, or even SAP certifications, through training. Moreover, people need to learn to do business in a totally new way. To define how much SAP training every person needs, a company can make use of a skillset matrix. With this matrix, a manager can identify who possesses what knowledge and can manage and plan training accordingly, by recording the level of expertise for each skill of each employee as a number, e.g. between 1 and 4.
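    As a toy illustration of such a skillset matrix (the names, skills and target level are all made up for the example), a training plan can be derived directly from the matrix:

```java
import java.util.*;

public class SkillsetMatrix {
    public static void main(String[] args) {
        // Rows: employees; columns: skills; values: expertise level 1..4
        Map<String, Map<String, Integer>> matrix = Map.of(
            "Anna", Map.of("SAP Basis", 4, "ABAP", 2),
            "Ben",  Map.of("SAP Basis", 1, "ABAP", 3));
        int target = 3; // required expertise level per skill

        // Training plan: every (employee, skill) pair below the target level
        List<String> plan = new ArrayList<>();
        matrix.forEach((person, skills) ->
            skills.forEach((skill, level) -> {
                if (level < target) plan.add(person + " needs training in " + skill);
            }));
        Collections.sort(plan); // deterministic output order
        plan.forEach(System.out::println);
    }
}
```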
    Setup SAP data center
    The next step is to set up the SAP data center. This means either building a new data center facility or transforming the current data center into a foundation capable of supporting the SAP solution stack, i.e. all of the technology layers and components (SAP software products) in a productive SAP installation. The most important factor when designing the data center is availability. The high availability and disaster recovery requirements, which should have been defined earlier, give a good idea of the data center requirements for hosting the SAP software. Data center requirements can include:
    Physical requirements like power requirements
    Rack requirements
    Network infrastructure requirements
    Network server requirements
    Perform installations
    The following step is to install the required SAP software parts, called components, and technological foundations like a web application server or enterprise portals, to a state ready for business process configuration. The most vital sub-steps are to prepare the OS, prepare the database server and then start installing the SAP software. Here it is very important to use the installation guides, which are published by SAP AG for each SAP component or technology solution. Examples of SAP components are:
    R/3 Enterprise — Transaction Processing
    mySAP BI — Business Information Warehouse
    mySAP CRM — Customer Relationship Management
    mySAP KW — Knowledge Warehouse
    mySAP PLM — Product Lifecycle Management
    mySAP SCM — Supply Chain Management
    mySAP SEM — Strategic Enterprise Management
    mySAP SRM — Supplier Relationship Management
    Round out support for SAP
    Before moving into the functional development phase, the organization should identify and staff the remaining TSO roles, e.g. roles that relate to helpdesk work and other such support providing work.
    Functional development
    The next phase is the functional development phase, where it is all about change management and testing. This phase is depicted below.
    Figure 7: Functional development phase
    Address change management
    The next challenge for an organization is all about change management / change control, which means to develop a planned approach to the changes the organization faces. The objective here is to maximize the collective efforts of all people involved in the change and to minimize the risk of failure of implementing the changes related to the SAP implementation.
    The implementation of SAP software will almost certainly come with many changes, and an organization can expect many natural reactions, e.g. denial, to these changes. To fight this, it is most important to create a solid project team dedicated to change management and to communicate the solution vision and goals of this team. This team should be prepared to handle the many change issues that come from various sources like:
    End-user requests
    Operations
    Data center team
    DBA group
    Systems management
    SAP systems and operations management
    The next thing is to create a foundation for SAP systems management and SAP computer operations, by creating an SAP operations manual and by evaluating SAP management applications. The manual is a collection of current-state system documentation, day-to-day and other regularly scheduled operations tasks, various installation and operations checklists, and how-to process documents.
    Functional, integration and regression testing
    Testing is very important before going live with any system. Before going live with an SAP system, it is vital to do many different kinds of testing, since there is often a large, complex infrastructure of hardware and software involved. Both the requirements and the quality parameters are to be tested. Important types of testing are:
    Functional testing: to test using functional use cases, i.e. a set of conditions or variables under which a tester will determine if a certain business process works
    Integration testing
    Regression testing
    All tests should be preceded by creating solid test plans.
Final preparation
    The last phase before going live can be referred to as the final preparation phase and is depicted below.
    Figure 8: Final preparation phase
    Systems and stress testing
Another vital preparation activity before going live with SAP is systems and stress testing. This means planning, scripting, executing and monitoring system and stress tests, to verify that end-user expectations, as defined in service level agreements, will be met. This can be done with SAP’s standard application benchmarks, comparing the organization’s configurations against configurations that have been tested by SAP’s hardware technology partners. Again, a test plan should be created first.
    Prepare for cutover
    The final phase before going live with SAP is often referred to as the cutover phase, which is the process of transitioning from one system to a new one. The organization needs to plan, prepare and execute the cutover, by creating a cutover plan that describes all cutover tasks that have to be performed before the actual go-live. Examples of cutover tasks are:
    Review and update all systems-related operations procedures like backup policies and system monitoring
    Assign ownership of SAP’s functional processes to individuals
Have SAP AG perform a GoingLive check, to get their sign-off for going live with the system
    Lock down the system, i.e. do not make any more changes to the SAP system
Go Live
All of the previously described phases lead toward this final moment: the go-live. Going live means turning on the SAP system for the end users, obtaining feedback on the solution, and monitoring it. It is also the moment when product software adoption comes into play. More information on this topic:
    Product Software Adoption: Big Bang Adoption
    Product Software Adoption: Parallel Adoption
    Product Software Adoption: Phased Adoption
    HTH
    Regards,
    Dhruv Shah

  • Office Web Apps Server 2013 / Real-Time co-authoring

Hello,
In general, I would like to ask the development team or product manager for SharePoint / Office Web Apps. As you already know, the cloud version of the product introduces new features, such as real-time co-authoring, unlike the on-premises version. This link demonstrates it:
http://blogs.office.com/2013/11/06/collaboration-just-got-easier-real-time-co-authoring-now-available-in-office-web-apps/
Please tell me the plans or roadmap for implementing these functions in the server version. Or does Microsoft have no plans to develop this area?

Great, and as I said, we are working to keep parity, but there are still many features that are technically considered in preview in the O365 environment. After several months, depending on stability, feedback, and whether the feature fits well with the product, it may either be tested to go into on-prem farms and come out of preview, or be dropped altogether (as happened with auto-hosted apps).
This feature is definitely one that is key to the future of the product, but there isn't a timeline for when it will come to on-prem. There is a lot of additional testing before anything can be integrated into on-prem, as those environments have many more variables than the rather rigid architecture of O365. In addition, the O365 farms run many of the preview features, so anything going into an on-prem feature needs to be re-vetted in farms without those extras and checked for dependencies (not normally an issue, but still part of the process).
    Christopher Webb | Microsoft Certified Master: SharePoint 2010 | Microsoft Certified Solutions Master: SharePoint Charter | Microsoft Certified Trainer| http://tealsk12.org Volunteer Teacher | http://christophermichaelwebb.com

  • File not found when trying to call a dll on LabVIEW Real Time machine

I have a DLL called "DLLRTTEST" that I've written, and have successfully called on my host machine. I'm now attempting to call this DLL from a VI that is located on my real-time computer. Currently I get an "Error 7 occurred at Call Library Function Node in DLLRTTEST.vi." message upon execution.
In the attached screenshot I'm trying to ensure that the VI I'm running is in fact located on the real-time system. I then use "Check if File or Folder Exists.vi" to confirm that the DLL I'm about to call does exist on the real-time system as well. However, I still receive an "error 7, file not found" error from the Call Library Function Node.
    Any help is appreciated.
    Attachments:
    DLL_Call_Screenshot.png ‏61 KB

As nathand already mentioned, depending on your C toolchain your DLL will depend on other DLLs: usually the msvcrtXX.dll that matches your Visual C version if you use Visual C, or other runtime DLLs if you use a different C environment. These runtime DLLs are necessary even if you do not call anything in any of your functions, since the DLL performs various initialization steps when it gets loaded, and these reference some C runtime functions regardless. Compiling and linking the DLL with a static C runtime is usually not a clean solution either, since the linked-in C runtime will then reference Windows APIs that are not available on LabVIEW RT.
Depending on your version of LabVIEW RT you will have some msvcrtXX.dll files in your system directory, but it will be an older one than that from the latest Visual Studio version. If you can compile your DLL with that Visual Studio version then you should be fine, but you could run into new problems if you later upgrade to a newer LabVIEW RT version. Installing the C runtime redistributables from newer Visual Studio versions is unfortunately not a solution either, since they reference many (undocumented) Windows API functions that are not available in LabVIEW RT.
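One quick way to see which runtime DLLs your build actually pulls in is to run `dumpbin /dependents DLLRTTEST.dll` from a Visual Studio command prompt on the build machine, then confirm that each listed DLL exists on the RT target. A minimal sketch of that check (the `dumpbin` output below is illustrative, and the parsing helper is hypothetical, not part of any NI or Microsoft tool):

```python
# Parse the output of `dumpbin /dependents DLLRTTEST.dll` (captured on the
# Windows build machine) and list the DLLs that must also be present on
# the LabVIEW RT target. The sample output below is illustrative.
sample_output = """\
Dump of file DLLRTTEST.dll

File Type: DLL

  Image has the following dependencies:

    MSVCR120.dll
    KERNEL32.dll
"""

def dependent_dlls(dumpbin_output):
    """Return the DLL names listed in a `dumpbin /dependents` dump."""
    deps = []
    in_deps = False
    for line in dumpbin_output.splitlines():
        stripped = line.strip()
        if stripped == "Image has the following dependencies:":
            in_deps = True
            continue
        if in_deps:
            if stripped.lower().endswith(".dll"):
                deps.append(stripped)
            elif stripped and deps:
                break  # past the end of the dependency list
    return deps

deps = dependent_dlls(sample_output)
print(deps)  # e.g. ['MSVCR120.dll', 'KERNEL32.dll']

# Any msvcr*/msvcp* entry must match a C runtime actually present on the
# RT target; a mismatch is a common cause of Error 7 at load time.
runtime_deps = [d for d in deps if d.lower().startswith(("msvcr", "msvcp"))]
print(runtime_deps)
```

If a runtime DLL shows up here that is newer than what ships with your LabVIEW RT version, that mismatch is the likely cause of the Error 7, even though the DLL file itself is present on the target.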
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • How to create a Real Time Interactive Business Intelligence Solution in SharePoint 2013

    Hi Experts,
I was recently given the below requirements to architect/implement a business intelligence solution that deals with instant/real-time data modifications in the data sources. After going through many articles, e-books and expert blogs, I am still unable to piece together the right information to design an effective solution to my problem. The client is ready to invest in the best infrastructure in order to achieve all of the below requirements, yet the task hangs over my head like a sword of Damocles in every direction I go.
    Requirements
    1) Reports must be created against many-to-many table relationships and against multiple data sources(SP Lists, SQL Server Custom Databases, External Databases).
    2) The Report and Dashboard pages should refresh/reflect with real time data immediately as and when changes are made to the data sources.
3) The reports should be cross-browser compatible (must work in Google Chrome, Safari, Firefox and IE), cross-platform (Mac, Android, Linux, Windows) and cross-device compatible (tablets, laptops & mobiles).
    4) Client is Branding/UI conscious and wants the reports to look animated and pixel perfect similar to what's possible to create today in Excel 2013.
5) The reports must be interactive, parameterized, sliceable, must load fast, and have the ability to drill down or expand.
    6) Client wants to leverage the Web Content Management, Document Management, Workflow abilities & other features of SharePoint with key focus being on the reporting solution.
    7) Client wants the reports to be scalable, durable, secure and other standard needs.
    Is SharePoint 2013 Business Intelligence a good candidate? I see the below limitations with the Product to achieve all the above requirements.
a) Power Pivot with Excel deployed to SharePoint cannot be used, as the minimum granularity of its refresh schedule is daily. This violates Requirement 2.
b) Excel Services, PerformancePoint and Power View work on an in-memory representation of the data. This violates Requirements 1 and 2.
c) SSRS does not render the reports as required in Requirements 3 and 4, and report rendering on the page is very slow even for sample data. This violates Requirements 3, 4 and 5.
Has someone been able to achieve all of the above requirements using the SharePoint 2013 platform, or any other platform? Please let me know the best possible solution. If possible, redirect me to whitepapers, articles or material that will help me design an effective solution. Eagerly looking forward to hearing from you, experts!
    Please feel free to write in case you have any comments/clarifications.
    Thanks, 
    Bhargav

    Hi Experts,
    Request your valuable inputs and support on achieving the above requirements.
Looking forward to your responses.
    Thanks,
    Bhargav
