Parallel computation - real-time analysis

I'm doing real-time image analysis, but my analysis program is currently the bottleneck, so I want to increase its speed. Since I need a speed-up of about 60-80%, the only solution would be doing the analysis in parallel on a dual-core CPU (currently running on an AMD64 3700+).
The most time-consuming computations in my program are two convolutions, so my idea is to have them running in parallel. I want to grab one image, do the first convolution and then send the result to a shift register. Next, the program should (in parallel) grab a new image and do the first convolution on it - and do the second convolution on the first image.
Is this possible?
As you can see in the attached VI, I'm trying to transfer the images in shift registers - is this the right way?
How do I wire the errors correctly?
How do I make sure this can actually run in parallel?
Hope someone can help me.
Simon
PS: This version of the program does not grab images; it just loads previously saved images. That way I can test the results.
Attachments:
speed test4.vi 52 KB

Hi, if you want to have those convolution functions running in parallel, you will need to give each of them its own independent error in. That means the second convolution function shouldn't get its error input from the first one.
I have modified your code so the two convolution functions now run in parallel. From the attached Result.jpg you can see an improvement in the VI execution time.
But I'm not sure about the result; it seems that something is missing from the output array.
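For readers who don't use LabVIEW: the pipelining idea from the original question (stage 1 grabs and runs the first convolution on the newest image while stage 2 runs the second convolution on the previous image) can be sketched in plain Java. This is only an illustration of the data flow, not the attached VI; all function names here are placeholders:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConvolutionPipeline {
    public static void main(String[] args) {
        // A one-slot queue plays the role of the shift register between loop iterations.
        BlockingQueue<float[][]> handoff = new ArrayBlockingQueue<>(1);

        Thread stage1 = new Thread(() -> {
            try {
                while (true) {
                    float[][] img = grabImage();      // placeholder for image acquisition
                    handoff.put(convolveFirst(img));  // hand the result to stage 2
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread stage2 = new Thread(() -> {
            try {
                while (true) {
                    float[][] partial = handoff.take();      // previous iteration's result
                    float[][] result = convolveSecond(partial);
                    // ... consume the fully processed image here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        stage1.start();  // both stages now run concurrently, one image apart
        stage2.start();
    }

    // Placeholders standing in for the grab and the two convolution kernels.
    private static float[][] grabImage()                  { return new float[480][640]; }
    private static float[][] convolveFirst(float[][] in)  { return in; }
    private static float[][] convolveSecond(float[][] in) { return in; }
}
```

On a dual-core CPU the two stages can then execute on separate cores, which is what the parallel structures with a shift register achieve in LabVIEW.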
Please let me know if you have found a better solution to this problem.
Regards
Dennis Morini
Field Sales Engineer
National Instruments Denmark
http://www.ni.com/ask
Attachments:
speed test4_mod.vi 53 KB
Result.JPG 93 KB

Similar Messages

  • Continuous data acquisition and analysis in real time

    Hi all,
    This is a VI for the continuous acquisition of an ECG signal. As far as I understand, the DAQmx Analog Read VI needs to be placed inside a while loop so it can acquire the data continuously. I need to perform filtering and analysis of the waveform in real time. The way I set up the block diagram, the data stays in the while loop, and as far as I know it will only be transferred out through the data tunnels once the loop finishes executing; clearly this is not real-time data processing.
    The only way I can think of to fix this is to place another while loop around the filtering-stage VIs and use some sort of shift register to pass the data to the second loop. My question is whether this would introduce some sort of delay, and whether it would still be considered real-time processing. Would it be better to place all the VIs (acquisition and filtering) inside one while loop, or is this bad programming practice? Another function I need to perform is to save the data in a file, but only when the user wants to do so.
    Any advice would be appreciated.

    You have two options:
    A.  As you mentioned, you can place code inside your current while loop to do the processing.  If you are clever, you won't need to place another while loop inside your existing one (nested loops).  But that totally depends on the type of processing you are doing.
    B.  Create a second parallel loop to do the processing.  This would decouple the processes to ensure that the processing does not hinder your acquisition.  See here for more info.
    Your choice really depends on the processing that you plan to perform.  If it is processor-intensive, it might introduce delays as you mentioned.  
    I would recommend you first try to place everything in the first loop and see if your DAQ buffer overflows (you can monitor the buffer backlog while it's running). If so, then you should decouple the processes into separate loops.
    Whether or not "this would be considered to be real-time processing" is a loaded question. Most people on these forums will say that your system will NEVER be real-time, because you are using a desktop PC to perform the processing (note: I am assuming this code is running on a laptop or desktop?). It is not a deterministic system, and your data is already "old" by the time it exits your DAQ buffer. But the answer to your question really depends on how you define "real-time processing". Many lay people will define it as the processing of "live data" ... but what is "live data"?
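    As a rough illustration of option B outside LabVIEW, here is a minimal producer/consumer sketch in plain Java; the acquisition and processing calls are placeholders, and in LabVIEW the same decoupling is typically done with queue functions between two loops:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AcquireAndProcess {
    public static void main(String[] args) {
        BlockingQueue<double[]> queue = new LinkedBlockingQueue<>();

        // Acquisition loop: stands in for the DAQmx Read while loop.
        Thread acquire = new Thread(() -> {
            while (true) {
                double[] block = readBlock();  // placeholder acquisition call
                queue.offer(block);            // hand off without slowing the read loop
            }
        });

        // Processing loop: filtering/analysis runs at its own pace.
        Thread process = new Thread(() -> {
            try {
                while (true) {
                    double[] block = queue.take();  // blocks until data is available
                    filterAndAnalyze(block);        // placeholder processing call
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        acquire.start();
        process.start();
    }

    private static double[] readBlock()              { return new double[1000]; }
    private static void filterAndAnalyze(double[] b) { /* placeholder */ }
}
```

    If the queue keeps growing, processing cannot keep up - the software analogue of the DAQ buffer backlog mentioned above.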

  • Can you provide some real-time examples of fit-gap analysis?

    Hi all,
    Can you provide some real-time examples of fit-gap analysis related to the functional side?
    Regards
    Reddy

    hi,
    In my opinion and experience, both functional and technical staff play key roles in the fit-gap meetings and in the subsequent review of the fit-gap deliverable (which lists business requirements, fits, gaps, and alternatives to fill the gaps). Ideally, the technical staff possesses significant functional/business experience, at least with that module, so that he/she can help determine the cost, effort, and technical scope if a customization or interface is the preferred way to fill a particular gap. The fit-gap goes a long way in determining the overall scope of the implementation or upgrade project and is a critical success factor. It also needs to be completed very early in the project so that the resulting customizations and interfaces can be designed, created, and unit tested before the system test phase.
    Thx,
    waseem

  • Is the Vision Development Module in LabVIEW 8.6 sufficient for real-time image acquisition and analysis using a webcam?

    Hi, 
    I'm new to LabVIEW and am trying to develop an eye tracker using LabVIEW 8.6. It has the Vision Development Module, and I was wondering if this is sufficient for real-time image acquisition and processing, or whether I would need any other software tools.

    Hello, certainly it is possible and sufficient for real-time tracking!
    About eye tracking - if you need an example, you can find the code here:
    https://decibel.ni.com/content/blogs/kl3m3n/2013/10/08/real-time-face-and-eye-detection-in-labview-u...
    The code uses OpenCV functionality along with the LabVIEW UI (and some other functions like overlay).
    Hope this helps a bit.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."

  • Can I acquire and analyse in real time with regular Labview?

    I have to acquire samples (which vary cyclically in a roughly sinusoidal fashion) from a sensor and check every sample to see if it is the minimum (the valley) of a cycle. If it is, and it does not fall within an expected range, I have to take a corrective action that involves rejecting the part that was just measured as well as 30 more parts (to make sure that the defective part has been rejected). The signal from the sensor is not very noisy, but because of the nature of the measured object, there could be local minima and maxima. To guard against that, a point is considered to be a valley only if subsequent readings deviate above that point by a certain amount. If a part is indeed defective, a digital output has to be issued to reject that part.
    Can all this be done using regular LabVIEW (not RT)? I tried it out with a prototype VI using DAQmx VIs, continuously acquiring samples but reading one sample at a time from within a loop (the VI I used is attached with this question). The result has been disappointing: each time the loop executes there is a delay that keeps building up. Finally, even after the part feed has been turned off, I can see LabVIEW processing signals from parts that have long since gone past the measuring head.
    Another perplexing thing I found is that the time taken to execute the while loop in the VI is not consistent; it takes anything from 6 to 50 ms to execute.
    I will need at least 8-12 samples from a part to build its profile, and the feed rate is about 3000 parts per minute. I am using LabVIEW 7.0 with an NI-6013 card in a Windows 2000 environment.
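    For what it's worth, the hysteresis rule described above ("a point is a valley only if subsequent readings deviate above it by a certain amount") can be written as a per-sample routine. This plain-Java sketch only illustrates the logic; the threshold and sample values are arbitrary assumptions:

```java
// Hysteresis-based valley detector: a sample becomes a candidate valley while the
// signal is falling, and is confirmed only once the signal has risen above the
// candidate by at least 'hysteresis'.
public class ValleyDetector {
    private final double hysteresis;
    private double candidate = Double.POSITIVE_INFINITY;
    private boolean confirmed = true;

    public ValleyDetector(double hysteresis) {
        this.hysteresis = hysteresis;
    }

    /** Feed one sample; returns the confirmed valley value, or NaN if none yet. */
    public double feed(double x) {
        if (x < candidate) {
            candidate = x;          // still falling: track the new minimum
            confirmed = false;
        } else if (!confirmed && x > candidate + hysteresis) {
            confirmed = true;       // signal rose far enough above the minimum
            double valley = candidate;
            candidate = x;          // reset so the next cycle's minimum can be tracked
            return valley;
        }
        return Double.NaN;
    }

    public static void main(String[] args) {
        ValleyDetector d = new ValleyDetector(0.5);  // hypothetical threshold
        double[] samples = {3.0, 2.0, 1.2, 1.4, 1.9, 3.0, 2.2};
        for (double s : samples) {
            double v = d.feed(s);
            if (!Double.isNaN(v)) System.out.println("valley confirmed at " + v);
        }
    }
}
```

    Whether such a per-sample check keeps up at 3000 parts/minute under Windows is exactly the loop-jitter question discussed in this thread.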
    Thanks for any suggestions / recommendations.
    Attachments:
    find_trough_2.vi 378 KB

    Hello,
    Thank you for your suggestions; I had already resigned myself to going for a Real Time system, your answer convinced me to commit myself to it!
    That said, your reply leads to a couple of (related) questions...
    1. Your point regarding the use of local variables is well taken; I have been repeatedly told at various training sessions how the necessity of updating the LVs during each loop iteration slows the computation. However, what alternative do we have when there are several controls to which we have to write AND read data multiple times during a loop iteration, and perform different computations based on the value held by these controls? (You have seen the VI I attached with the original question.) Some of these conditional computations further change the value of the controls. Does LabVIEW have any other mechanism to store and manipulate the intermediate value from a computation?
    2. I did a simple experiment to determine the average loop time, and the results were surprising. I placed the entire contents of the VI I used (Find the Valley in cycle.vi) in a stacked sequence structure and wired the index counter "i" to a control to count the number of iterations the loop executes. I placed a frame before this with a tick count instruction to get the start time of the loop, and a frame after it to get the end time. Dividing the difference of these by the number of iterations, I got the average loop time to be around 1.2 ms! Am I interpreting my results incorrectly?
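    A side note on interpreting these numbers: an average can easily be 1.2 ms even when individual iterations take 6-50 ms, so for a parts feed the worst case matters more than the mean. In plain Java, the same measurement as the tick-count frames - extended to also record the worst iteration - would look roughly like this (the loop body is a placeholder):

```java
public class LoopTimer {
    public static void main(String[] args) {
        final int iterations = 10_000;
        long start = System.nanoTime();              // first "tick count" frame
        long worst = 0;
        for (int i = 0; i < iterations; i++) {
            long t0 = System.nanoTime();
            doOneIteration();                        // placeholder for the loop body under test
            worst = Math.max(worst, System.nanoTime() - t0);
        }
        long avg = (System.nanoTime() - start) / iterations;  // second frame, divided by i
        System.out.printf("average: %.3f ms, worst: %.3f ms%n", avg / 1e6, worst / 1e6);
    }

    private static void doOneIteration() { /* placeholder work */ }
}
```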
    Thanks once again for your response. I would really appreciate your views on the questions I have raised in this comment.
    Regards
    Arun P. Madangarli

  • Independent 7-Year LabVIEW (3.0-7.1) and Real-Time Programmer w/ Signal Analysis and Audio/Video/Imaging Background

    I have been a LabVIEW programmer since version 3.1 and am now into version 7.1, including the Real-Time platform. I have also developed database connectivity in Perl/CGI/HTML in conjunction with this experience.
    I also have over a decade of experience in Audio, Video and Imaging technology as a systems designer and production engineer. I am currently a part-time employee and an independent contractor. My company name is The Oakland Group. I am available for both short-term full-time and long-term part-time projects.
    I look forward to discussing any LabVIEW project with you.
    Tom Held
    [email protected]
    414-964-0518

    Please see my attached resume.
    Tom Held
    Attachments:
    resume-scada.pdf 43 KB

  • Need your help!!! -- Authorization error for real-time BEx query

    Dear experts:
    I have a BEx query built on a MultiProvider; this MultiProvider consists of one standard cube and one real-time cube (virtual provider). When I run this query, I can retrieve the data and no error occurs, but when others execute this query, they get the error message below:
    Operation Generate a Request could not be carried out for DTP DTP_D7JVT8DGBQWPWL13VIUITNODG
    You do not have authorization for the data transfer process
    Errors occurred during parallel processing of query 2, RC: 3
    Error while reading data; navigation is possible
    Row: 54 Inc: WRITE_MESSAGES Prog: CL_RSDR_AT_QUERY
    I used tcode RSECADMIN to trace the authorization. When I execute as user XXX, I get the same error as above, but when I go to "Display Log", no error shows there.
    Did anybody meet the same error before? How can it be resolved?
    Any post would be appreciated and thank you all in advance!
    Tim

    I think there is some missing authorization here: the user has no access to execute the DTP. You can analyze it in RSECADMIN, or check which authorization object fails in SU53.
    Thanks,
    Saveen

  • How to Integrate real time data between 2 database servers

    I have a scenario where the database (DB2/400) is maintained by an AS/400 application, and my new website application, based on the J2EE platform, accesses the same database, but the performance is very low. So we have thought of introducing a new Oracle database which will be accessed by the J2EE application, and all the data from the DB2/400 database will be replicated to the Oracle database. In that scenario the only problem is real-time data exchange between the two databases. How do we achieve that, considering that both applications - the AS/400 application and the J2EE website application - run in parallel and access the same information lying in the DB2/400 database? We also have to look at transaction management.
    Thanks
    Panky
    DrClap (reply 1 of 2):
    You certainly wouldn't use XML for this.
    The process you're looking for is called "replication". Ask your database experts about it.
    I predict that after you spend all the money to install Oracle and hire consultants to make it replicate the DB2/400 database, your performance problem will be worse.
    panks (reply 2 of 2):
    Yeah, I know that it's not an XML solution.
    Replication is one option, but the AS/400 application which uses the DB2/400 DB is heavily loaded, and the proposed website also uses the same database for retrieval and update purposes. All the inventory is maintained in the DB2/400 database, so I have thought of introducing a new Oracle database which will be accessed by the new website and will have all the relevant table structures, along with data, from the DB2/400 application. Now, whenever there is an order placement from the new website, it should first update the Oracle database, and then this data should also migrate to the DB2/400 application in real time, so that the main inventory lying on DB2/400 is updated on a real-time basis, because order placement is also possible from the AS/400 application. The users of the AS/400 application should not get wrong data.
    Is it possible to use MQ products?
    -Panky

    Hi,
    the answer to your question is not easy. Synchronizing, integrating, or replicating data between two (or more) database servers is a very complicated task, even though it doesn't look like one.
    First, I would recommend creating a good analysis of the data flow.
    Important things are:
    1) What is the primary side for data creation? In other words, on which side - DB2 or Oracle - is the primary data (created there), and on which side is the secondary data (just copies)?
    2) On which side is data changed - only on the DB2 side, only on the Oracle side, or on both sides?
    3) Is there data that is changed on both sides concurrently? If so, how should conflicts be resolved?
    4) What does "real time" mean? Is it up to 1 ms, 1 s, 1 min or 1 hour?
    5) What should be done when replication does not work (e.g. a replication crash)?
    BTW. The word "change" above means INSERT, UPDATE, DELETE commands.
    Analysis should be done for every column in every table. When the analysis is ready, you can select the best system for your solution (Oracle replication, Sybase Replication Server, MQ, EJB or your own proprietary solution). Without analysis it will be, IMHO, a shot in the dark.
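    If the MQ route asked about above is chosen, the usual J2EE pattern is to publish every committed change as a message that the other side applies asynchronously. A minimal, hypothetical JMS sketch - the JNDI names and the payload format are made up for illustration:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderReplicator {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // JNDI names are deployment-specific placeholders.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrderReplication");

        Connection conn = cf.createConnection();
        Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(queue);

        // After the order is written to Oracle, publish it for the AS/400 side to apply.
        TextMessage msg = session.createTextMessage("ORDER;id=42;qty=10"); // placeholder payload
        producer.send(msg);
        session.commit();  // the message becomes visible only when the send commits
        conn.close();
    }
}
```

    This gives near-real-time propagation, but the analysis points above still apply: a queue does not by itself resolve concurrent updates made on both sides.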

  • Hi Experts! Clarification regarding the phases of a project in real time

    Hi ,
    Can anybody please explain the phases of a project and their details, i.e. what is done at each stage in real time, since I am very new to this kind of phase structure?
    Please do not send me any links for reference; rather, please describe it in detail.
    Regards,
    Eshwant....

    Hi,
    Implementation processes:
    Project preparation
    The project preparation phase focuses on two main activities: setting up the TSO and defining a solution vision. These activities allow an organization to put itself on the right track towards implementation.
    Design and initially staff the SAP TSO
    The first major step of the project preparation phase is to design and initially staff an SAP technical support organization (TSO), which is the organization charged with addressing, designing, implementing and supporting the SAP solution. This can include programmers, project management, database administrators, test teams, etc. At this point, the focus should be on staffing the key positions of the TSO, e.g. the high-level project team and SAP professionals like the senior database administrator and the solution architect. This is also the time to decide between internal staff members and external consultants.
    The image at the right shows a typical TSO chart.
    Craft solution vision
    The second project preparation job is to define a so-called solution vision, i.e. a vision of the future-state of the SAP solution, where it is important to address both business and financial requirements (budgets). The main focus within the vision should be on the company’s core business and how the SAP solution will better enable that core business to be successful. Next to that, the shortcomings of the current systems should be described and short but clear requirements should be provided regarding availability (uptime), security, manageability and scalability of the SAP system.
    Sizing and blueprinting
    The next phase is often referred to as the sizing and blueprinting phase and forms the main chunk of the implementation process.
    Perform cost of ownership analysis
    Figure 5: Solution stack delta analysis
    This phase starts with performing a total cost of ownership (TCO) analysis to determine how to get the best business solution at the lowest cost. This means comparing SAP solution stack options and alternatives and then determining what costs each part of the stack will bring and when these costs will be incurred. Parts of the stack are, for example, the hardware, operating system and database, which form the acquisition costs. Next to that, recurring costs like maintenance costs and downtime costs should also be considered. Instead of performing a complete TCO analysis for each of the solution stack alternatives one would like to compare, it can be wise to do a so-called delta analysis, where only the differences between the solutions (stacks) are identified and analyzed. The figure depicts the essence of a delta analysis.
    Identify high availability and disaster recovery requirements
    The next step is identifying the high availability requirements and the more serious disaster recovery requirements. This is to plan for eventual downtime of the SAP system, caused by e.g. hardware failures, application failures or power outages. It should be noted that it is very important to calculate the cost of downtime, so that an organization has a good idea of its actual availability requirements.
    Engage SAP solution stack vendors
    Figure 6: Simplified SAP solution stack
    The next step is a true sizing process: engaging the SAP solution stack vendors. This means selecting the best SAP hardware and software technology partners for all layers and components of the solution stack, based on a side-by-side sizing comparison. The most important factors of influence here are the estimated numbers of (concurrent) users and batch sizes. A wise thing to do is to involve SAP AG itself, letting it create a sizing proposal stating the advised solution stack before moving to SAP's technology partners/vendors, like HP, Sun Microsystems and IBM. A simplified solution stack is depicted in the figure, showing the many layers for which software and hardware have to be acquired. Note the overlap with the OSI model.
    Staff TSO
    The TSO is the most important resource for an organization that is implementing SAP, so staffing the TSO is a vital job which can consume a lot of time. In a previous phase, the organization should already have staffed the most vital positions. At this point the organization should staff the bulk of the TSO, i.e. fill the positions that directly support the near-term objectives of the implementation, which are to develop and begin the installation/implementation of the SAP data center. Examples are: data center experts, network infrastructure experts, security specialists and database administration experts.
    There are many ways to find the right people within or outside the organization for all of the TSO positions and it depends on the organization how much time it wants to spend on staffing.
    Training
    One of the most vital stages of the implementation process is training. Very few people within an organization are SAP experts or even have worked with SAP software. It is therefore very important to train the end users but especially the SAP TSO: the people who design and implement the solution. Many people within the TSO need all kinds of training. Some examples of these positions:
    SAP Network Specialists
    SAP Database Administrators
    SAP Security specialists
    Documentation specialists
    Et cetera
    All of these people need to acquire the required SAP knowledge and skills, or even SAP certifications, through training. Moreover, people need to learn to do business in a totally new way. To define how much SAP training every person needs, a company can make use of a skillset matrix. With this matrix, a manager can identify who possesses what knowledge and can manage and plan training by rating the level of expertise with a number between, e.g., 1 and 4 for each skill for each employee.
    Setup SAP data center
    The next step is to set up the SAP data center. This means either building a new data center facility or transforming the current data center into a foundation capable of supporting the SAP solution stack, i.e. all of the technology layers and components (SAP software products) in a productive SAP installation. The most important factor when designing the data center is availability. The high availability and disaster recovery requirements, which should have been defined earlier, give a good idea of the required data center requirements to host the SAP software. Data center requirements can include:
    Physical requirements like power requirements
    Rack requirements
    Network infrastructure requirements
    Requirements for the network server
    Perform installations
    The following step is to install the required SAP software parts, which are called components, and technological foundations like a web application server or enterprise portals, to a state ready for business process configuration. The most vital sub-steps are to prepare the OS, prepare the database server and then start installing the SAP software. Here it is very important to use the installation guides, which are published for each SAP component or technology solution by SAP AG. Examples of SAP components are:
    R/3 Enterprise — Transaction Processing
    mySAP BI — Business Information Warehouse
    mySAP CRM — Customer Relationship Management
    mySAP KW — Knowledge Warehouse
    mySAP PLM — Product Lifecycle Management
    mySAP SCM — Supply Chain Management
    mySAP SEM — Strategic Enterprise Management
    mySAP SRM — Supplier Relationship Management
    Round out support for SAP
    Before moving into the functional development phase, the organization should identify and staff the remaining TSO roles, e.g. roles that relate to helpdesk work and other such support-providing work.
    Functional development
    The next phase is the functional development phase, where it is all about change management and testing. This phase is depicted below.
    Figure 7: Functional development phase
    Address change management
    The next challenge for an organization is change management / change control, which means developing a planned approach to the changes the organization faces. The objective here is to maximize the collective efforts of all people involved in the change and to minimize the risk of failure of implementing the changes related to the SAP implementation.
    The implementation of SAP software will almost surely come with many changes, and an organization can expect many natural reactions, e.g. denial, to these changes. To counter this, it is most important to create a solid project team dedicated to change management and to communicate the solution vision and goals of this team. This team should be prepared to handle the many change issues that come from various sources like:
    End-user requests
    Operations
    Data center team
    DBA group
    Systems management
    SAP systems and operations management
    The next thing is to create a foundation for SAP systems management and SAP computer operations by creating an SAP operations manual and by evaluating SAP management applications. The manual is a collection of current-state system documentation, day-to-day and other regularly scheduled operations tasks, various installation and operations checklists, and how-to process documents.
    Functional, integration and regression testing
    Testing is very important before going live with any system. Before going live with an SAP system, it is vital to do many different kinds of testing, since there is often a large, complex infrastructure of hardware and software involved. Both requirements and quality parameters are to be tested. Important types of testing are:
    Functional testing: to test using functional use cases, i.e. a set of conditions or variables under which a tester will determine if a certain business process works
    Integration testing
    Regression testing
    All tests should be preceded by creating solid test plans.
    Final preparation
    The last phase before going live can be referred to as the final preparation phase and is depicted below.
    Figure 8: Final preparation phase
    Systems and stress testing
    Another vital preparation activity before going live with SAP is systems and stress testing. This means planning, scripting, executing and monitoring system and stress tests to see if the expectations of the end users, defined in service level agreements, will be met. This can be done with SAP's standard application benchmarks, to benchmark the organization's configurations against configurations that have been tested by SAP's hardware technology partners. Again, a test plan should be created first.
    Prepare for cutover
    The final phase before going live with SAP is often referred to as the cutover phase, which is the process of transitioning from one system to a new one. The organization needs to plan, prepare and execute the cutover, by creating a cutover plan that describes all cutover tasks that have to be performed before the actual go-live. Examples of cutover tasks are:
    Review and update all systems-related operations procedures like backup policies and system monitoring
    Assign ownership of SAP’s functional processes to individuals
    Let SAP AG do a GoingLive check, to get their blessing to go live with the system
    Lock down the system, i.e. do not make any more changes to the SAP system
    Go Live
    All of the previously described phases lead towards this final moment: the go-live. Going live means turning on the SAP system for the end users, obtaining feedback on the solution and monitoring the solution. It is also the moment where product software adoption comes into play. More information on this topic:
    Product Software Adoption: Big Bang Adoption
    Product Software Adoption: Parallel Adoption
    Product Software Adoption: Phased Adoption
    HTH
    Regards,
    Dhruv Shah

  • InDesign auto-size frame feature not working in real time in InCopy - why?

    We have just recently migrated from InCopy CS4 to CS6 to take advantage of new features like the auto-resize frame option; however, it now seems that this feature is not working in real time.
    Basically, the following steps need to be completed before it auto-resizes the frame in InCopy (we use both layout- and assignment-based workflows):
    1. From an ID document ('doc1'), we export a 'layer' to IC; certain frames are set to auto-size in height using the text frame options, so that editorial can review and make changes to text and the frame resizes according to the specifications set. IC stories are saved to a folder located in a content folder inside the top issue working folder.
    2. Editorial opens the IC software, then opens the ID 'doc1', checks out the correct .icml file and makes edits to a frame with auto-resize.
    3. The frame does not resize according to the text frame options set, and the InCopy file does not respond in the same fashion as InDesign.
    4. The change only occurs when the InCopy file is closed and updated in InDesign, which is frustrating, as this feature would save huge amounts of time serving editorial requests.
    Has anybody experienced this type of workflow problem? If anyone can provide me with some pointers as to what I can do to get this to update in real time (perhaps run a script, or update the file in InCopy and refresh), I will very much appreciate the assistance. I have run out of ideas.
    Thanks!

    We've had all sorts of problems with this feature, as it should've worked straight out of the box, but after some testing we have found that it's something to do with the way you open the actual file in InCopy - which is far from ideal and should have been user-acceptance tested by Adobe before release.
    This will not work consistently if you open the designed .indd or .icma file in InCopy using the File > Open command within the application.
    If you need this to work, the InCopy user has to open the .indd or .icma file by dragging and dropping it from Windows Explorer into InCopy (we use Windows 7 across all the teams). Check out the .icml files and add text changes to the auto-resize frames; this process will expand/collapse the frames to fit the content. But because you have to use the drag-and-drop method to open the .indd or .icma file, two users cannot access the same doc at the same time (a serious flaw in the programming architecture!), which stops people working in parallel. Save changes, check in the .icml content and close the .indd or .icma.
    However, the flaw comes in if you then open the .indd or .icma file in InCopy using the File > Open command within the application before an InDesign user opens and saves the file (updating the design). The corrections added in the previous stage will not show the frames expanded/collapsed to take in the added text, and instead show overset matter. The only way around this is to ask an InDesign user to open, update and save the design; that way the InCopy user will see the same result no matter which file-open method they use.
    Another suggestion is to design the page with some of the auto-resize frames anchored within the main body of text; that way the frames will expand/collapse when checking out and editing the content. However, this does cause issues with InDesign crashing etc., so we have tried to stop this method within the working group.
    Have you experienced other more serious issues with InDesign crashing consistently when re-importing .icml files? See other forums here:
    http://forums.adobe.com/thread/671820?start=80&tstart=0
    http://forums.adobe.com/message/5045608#5045608
    As far as we can see, this is a major flaw in how the application(s) work. We have an enterprise agreement with Adobe and purchase a large volume of Adobe products globally, but so far the technical support team has been unable to find a solution to this, and I'm not hopeful of any resolution soon, even with the new release of Adobe CC.

  • Process Chain for Real-Time Daemon

    Please help, I am stuck. I followed the steps from SDN, but one thing is missing: how to create the process chain.
    I created the below:
    DSO connected to the DataSource via a transformation
    Real-time InfoPackage
    Real-time DTP
    assigned to the DataSource; I then assigned the DS, IP and DTP to the daemon in RSRDA. I have also started it manually via "start all IPs". But how do I set up the process chains now?
    Please help me step by step with the process chain, since I am new to this daemon in process chains.
    Thanks
    Soniya

    Hi
    refer to this
    CREATION OF PROCESS CHAINS
    Process chains are used to automate the loading process.
    They will be used in all applications, as you cannot schedule hundreds of InfoPackages manually and daily.
    Metachain
    Steps for a metachain:
    1. Start (in this variant, set your schedule times for this metachain)
    2. Local Process Chain 1 (say it's a master data process chain: get into the start variant of this chain (a subchain, like any other chain) and check the second radio button "Start Using Meta Chain or API")
    3. Local Process Chain 2 (say it's a transaction data process chain; do the same as in step 2)
    Steps for Process Chains in BI 7.0 for a Cube.
    1. Start
    2. Execute Infopackage
    3. Delete Indexes for Cube
    4. Execute DTP
    5. Create Indexes for Cube
    For DSO
    1. Start
    2. Execute Infopackage
    3. Execute DTP
    4. Activate DSO
    For an IO
    1. Start
    2. Execute Infopackage
    3. Execute DTP
    4. Attribute Change Run
    Data to Cube thru a DSO
    1. Start
    2. Execute Infopackage (loads up to PSA)
    3. Execute DTP (to load the DSO from PSA)
    4. Activate DSO
    5. Further Processing
    6. Delete Indexes for Cube
    7. Execute DTP (to load the Cube from the DSO)
    8. Create Indexes for Cube
    3.X
    Master data loading (Attributes, Texts, Hierarchies)
    Steps:
    1. Start
    2. Execute Infopackage (if you are loading two InfoObjects, just have them all in parallel)
    3. You might want to load in sequence: Attributes - Texts - Hierarchies
    4. And (connecting all Infopackages)
    5. Attribute Change Run (add all relevant InfoObjects)
    Start
    -> Infopackage1A (Attr) -> Infopackage1B (Txts) -> Infopackage1C (Txts)
    -> Infopackage2A (Attr) -> Infopackage2B (Txts), in parallel with the first branch
    -> And Processor (connecting Infopackage1C & Infopackage2B)
    -> Attribute Change Run (add InfoObject 1 & InfoObject 2 to this variant)
    For a Cube
    1. Start
    2. Delete Indexes for Cube
    3. Execute Infopackage
    4. Create Indexes for Cube
    For DSO
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    For an IO
    1. Start
    2. Execute Infopackage
    3. Attribute Change Run
    Data to Cube thru a DSO
    1. Start
    2. Execute Infopackage
    3. Activate DSO
    4. Further Processing
    5. Delete Indexes for Cube
    6. Execute Infopackage
    7. Create Indexes for Cube

  • UCCX 10.5 Real Time Report shows "Not Connected"

    Hi,
    UCCX 10.5 HA over LAN.
    As per the title, RTR shows "Not Connected" for both nodes and no data is shown for any report. All services are IN SERVICE on both nodes.
    Any Ideas?
    Regards.

    Hi Mohamed,
    Are you accessing this via IE or Firefox? If so, which version?
    Real-time reporting on UCCX connects to RMI port 6999 to pull the information from the Engine. If there are no port blockages and no issues with the browsers, then it is a problem with the configuration files in root holding this information, which needs in-depth analysis.
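    To rule out a port blockage first, a quick reachability check against the RMI port can be scripted; this plain-Java sketch uses a placeholder host name for the UCCX node:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class RtrPortCheck {
    public static void main(String[] args) {
        String host = "uccx-node1.example.com";  // placeholder: your UCCX node
        int port = 6999;                         // RMI port used by Real Time Reporting
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000);  // 3 s timeout
            System.out.println("port " + port + " is reachable");
        } catch (Exception e) {
            System.out.println("port " + port + " blocked or closed: " + e.getMessage());
        }
    }
}
```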
    A restart of the Engine should also help. I know that everything is IN SERVICE, but I have noticed a lot of issues on UCCX 10.5 when there are stuck calls on the system; it impacts RTR with the midnight stats reset.
    A restart of the Engine is bound to clear all the stuck calls in the system, and this may be a side effect of the stuck calls, especially if RTR was working before you noticed the problems.
    Regards,
    Arundeep

  • BAdIs: how are they used in real-time scenarios?

    I'm a rookie in ABAP development, and I was wondering how the BAdI enhancement technique is used in real-time scenarios. Do we create them ourselves, or are there already-built BAdIs which we work on? Can anybody please explain how this works?

    Hi Ramana,
    Business add-ins are enhancements to the standard version of the system.
    Business Add-In is a new SAP enhancement technique based on ABAP Objects.
    They can be inserted into the SAP system based on specific user requirements.
    Each Business Add-In has:
    • at least one Business Add-In definition
    • a Business Add-In interface
    • a Business Add-In class that implements the interface
    In order to enhance a program, a Business Add-In must first be defined.
    Subsequently, two objects are automatically generated:
    • An interface with ‘IF_EX_’ inserted between the first and second characters of the BADI name.
    • An adapter class with ‘CL_EX_’ inserted between the first and second characters of the BADI name.
    The application developer creates an interface for this add-in.
    There are multiple ways of searching for a BAdI:
    • Finding BADI Using CL_EXITHANDLER=>GET_INSTANCE
    • Finding BADI Using SQL Trace (TCODE-ST05).
    • Finding BADI Using Repository Information System (TCODE- SE84).
    1. Go to the transaction for which we want to find the BAdI - take the example of transaction VD02. Click on System -> Status. Double-click on the program name. Once inside the program, search for ‘CL_EXITHANDLER=>GET_INSTANCE’.
    Make sure the radio button “In main program” is checked. A list of all the programs with calls to BAdIs will be shown.
    The export parameter ‘EXIT_NAME’ of the method GET_INSTANCE of class CL_EXITHANDLER will have the user exit assigned to it. The changing parameter ‘INSTANCE’ will have the interface assigned to it. Double-click on the method to enter the source code. The definition of ‘instance’ will give you the interface name.
    2. Start transaction ST05 (Performance Analysis).
    Set flag field "Buffer trace"
    Remark: We need to trace also the buffer calls, because BADI database tables are buffered. (Especially view V_EXT_IMP and V_EXT_ACT)
    Push the button "Activate Trace". Start transaction VA02 in a new GUI session. Go back to the Performance trace session.
    Push the button "Deactivate Trace".
    Push the button "Display Trace".
    The popup screen "Set Restrictions for Displaying Trace" appears.
    Now, filter the trace on Objects:
    • V_EXT_IMP
    • V_EXT_ACT
    Push button "Multiple selections" button behind field Objects
    Fill: V_EXT_IMP and V_EXT_ACT
    All the interface class names of view V_EXT_IMP start with IF_EX_. This is the standard SAP prefix for BADI class interfaces. The BADI name is after the IF_EX_.
    So the BADI name of IF_EX_CUSTOMER_ADD_DATA is CUSTOMER_ADD_DATA
    3. Go to “Maintain Transaction” (TCODE SE93).
    Enter the transaction VD02 for which you want to find the BAdI.
    Click on the Display push button.
    Get the package name (package VS in this case).
    Go to TCode SE84 -> Enhancements -> Business Add-Ins -> Definition.
    Enter the package name and execute.
    Here you get a list of all the enhancement BAdIs for the given package.
    Have a look at http://help.sap.com/saphelp_nw04/helpdata/en/04/f3683c05ea4464e10000000a114084/content.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/73/7e7941601b1d09e10000000a155106/frameset.htm
    http://support.sas.com/rnd/papers/sugi30/SAP.ppt
    http://www.sts.tu-harburg.de/teaching/sap_r3/ABAP4/abapindx.htm
    http://members.aol.com/_ht_a/skarkada/sap/
    http://www.ct-software.com/reportpool_frame.htm
    http://www.saphelp.com/SAP_Technical.htm
    http://www.kabai.com/abaps/q.htm
    http://www.guidancetech.com/people/holland/sap/abap/
    http://www.planetsap.com/download_abap_programs.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/c8/1975cc43b111d1896f0000e8322d00/content.htm
    /people/thomas.weiss/blog/2006/04/03/how-to-define-a-new-badi-within-the-enhancement-framework--part-3-of-the-series
    /people/thomas.weiss/blog/2006/04/18/how-to-implement-a-badi-and-how-to-use-a-filter--part-4-of-the-series-on-the-new-enhancement-framework
    How to develop BADI
    Reward if useful.
    Thanks
    Aneesh.

  • Scheduling in Real-Time Java

    Hello,
    I have some questions concerning how scheduling is in fact intended to be performed in an RTSJ-based Real-Time Java system.
    As far as I understood, RTSJ requires pre-emptive priority-based dispatching of Schedulable objects.
    This means that the execution eligibility of a schedulable entity is mainly its priority.
    That causality is reflected within the specification with the (one-and-only specified) PriorityScheduler, which is the base scheduler for actual Real-Time Java applications.
    Furthermore, there is a notion of extensibility of that PriorityScheduler described by the RTSJ, in order to provide further scheduling mechanisms and feasibility analysis algorithms (please correct me if any of these assumptions are wrong).
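    As background for the questions below, this is roughly how an application touches the RTSJ scheduling API (javax.realtime); a minimal sketch assuming an RTSJ 1.0.x implementation such as Java RTS, with arbitrary priority and period values:

```java
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class PeriodicDemo {
    public static void main(String[] args) {
        PriorityScheduler sched = PriorityScheduler.instance();
        // One below the maximum real-time priority (arbitrary choice).
        PriorityParameters prio = new PriorityParameters(sched.getMaxPriority() - 1);
        // 10 ms period; start, cost, deadline and handlers left at their defaults.
        PeriodicParameters release = new PeriodicParameters(
                null, new RelativeTime(10, 0), null, null, null, null);

        RealtimeThread rt = new RealtimeThread(prio, release) {
            public void run() {
                do {
                    // periodic real-time work goes here
                } while (waitForNextPeriod());  // blocks until the next release
            }
        };
        rt.start();
    }
}
```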
    This is the point where everything becomes really weird to me ...
    As far as I could investigate, in most RTSJ implementations based on a POSIX-compliant system underneath (as Java RTS is on RTLinux or Solaris), each (Realtime)JavaThread is mapped 1-to-1 to a light-weight process on the operating system level (e.g. a pthread).
    So far, we have no "green threads" within the JVM, but real LWPs scheduled by the OS.
    The difference between "normal" and "real-time" threads lies in the scheduling policy used for that mapping.
    While normal Java threads probably map to SCHED_RR or SCHED_OTHER, real-time threads are scheduled by the OS via the SCHED_FIFO policy in order to achieve better real-time predictability.
    However, the OS's scheduling mechanisms automatically make decisions about the right position of an LWP within the appropriate run queue, due to a thread's preemption, blocking or release activities (even dynamic priority changes) and its scheduling policy.
    That's exactly why I ask myself: what is the need for a Scheduler representation within the JVM?
    Furthermore, how is a Scheduler extension able to interoperate with the threading model and the underlying scheduling mechanisms of the OS?
    One point could be a situation where a real-time JVM runs directly on top of the bare hardware and has to perform scheduling decisions on its own.
    The Scheduler API could then be understood as an extension mechanism of a kind of JVM-intern scheduler (e.g. the PriorityScheduler), thereby allowing scheduling decisions to be made even in user defined Scheduler implementations.
    A similar use case for an OS-based scenario would be a JVM that is intended to pass the scheduling/threading routines of the underlying OS (e.g. a part of the POSIX API) up to the Java application level, in order to provide the opportunity for a kind of application-defined scheduling (like e.g. in MaRTE OS).
    Unfortunately, after inspecting the RTSJ API, both conclusions seem to be wrong.
    So far, Java RTS seems not to provide any mechanism for reacting to scheduling events/decisions, neither intra-JVM nor from an underlying OS outside of the JVM.
    Furthermore, there is no notion of cooperating with the base PriorityScheduler for making extended scheduling decisions.
    I hope this post can bring me more light on the scheduling idea behind Real-Time Java systems as intended by the RTSJ.
    Sincerely,
    Vladimir

    Vladimir.Nikolov wrote:
    That means that a scheduling policy different from the PriorityScheduler can only be assigned to a Schedulable object if it is supported by the OS and the JVM?
    Well, it has to be supported by that implementation of the RTSJ. How that is done - i.e. whether it requires OS support - depends on that VM, the OS and the actual scheduling policy.
    It also seems that at the current state of the art the PriorityScheduler representative within the JVM is intended only for manipulating a feasibility set of Schedulable objects (supporting online feasibility analysis)?
    However, since user-defined scheduling is not intended by the specification, applications have to rely on the feasibility analysis based on the underlying/supported scheduling mechanisms.
    Thus, in the current Java RTS implementation this would be the "default" feasibility mechanism based on the PriorityScheduler.
    Unfortunately I cannot figure out the need for maintaining a feasibility set, since feasibility, as specified for the PriorityScheduler, is the simple assumption that we have "an adequately fast machine to handle the periodic and sporadic load"?
    I actually assumed that feasibility analysis performs real cost budgeting, taking into account deadlines and so on, but it seems to be specified simply to make a negative statement whenever aperiodic tasks are involved?
    The RTSJ scheduling framework provides support for feasibility analysis by defining the admission control methods, e.g. setXXXIfFeasible. However, the RTSJ does not, and cannot, mandate any non-trivial feasibility algorithm, because in simple terms no such general algorithms exist. There are some static feasibility tests in the literature and you could apply those offline to your application (assuming you can find the values of all the "magic" numbers in such formulae - which is generally not the case). At present the RTSJ doesn't support even these simple feasibility tests, because blocking time is not recorded in the release parameters - something being addressed in RTSJ 1.1. In any case, unless there is a pluggable framework for feasibility tests, it would be a waste of time for VMs to implement them, given they can (more) easily be done offline using other tools.
    Only dynamic admission control is really of interest and as far as I am aware no such general dynamic admission control policies exist (anything you find in the literature is very context specific). So it is left up to an implementation as to whether they try and define dynamic admission control algorithms - and so far none have because they don't have one.
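    To make the "static feasibility tests in the literature" concrete: the classic example is the Liu and Layland (1973) utilization bound for rate-monotonic scheduling, which can be applied offline. A plain-Java sketch - this comes from the literature, not from any RTSJ API, and the task values are hypothetical:

```java
public class RmFeasibility {
    /**
     * Sufficient (not necessary) Liu & Layland test: n independent periodic
     * tasks are schedulable under rate-monotonic priorities if
     * sum(cost_i / period_i) <= n * (2^(1/n) - 1).
     * Costs and periods must use the same time unit.
     */
    public static boolean sufficientRmTest(double[] costs, double[] periods) {
        int n = costs.length;
        double utilization = 0.0;
        for (int i = 0; i < n; i++) {
            utilization += costs[i] / periods[i];
        }
        return utilization <= n * (Math.pow(2.0, 1.0 / n) - 1.0);
    }

    public static void main(String[] args) {
        double[] costs   = {1.0, 2.0, 3.0};    // hypothetical worst-case costs
        double[] periods = {10.0, 20.0, 40.0}; // matching periods
        System.out.println("schedulable: " + sufficientRmTest(costs, periods));
    }
}
```

    Note that it needs exactly the "magic" numbers mentioned above (worst-case costs), which is why such tests are usually applied offline with measurement tools rather than inside the VM.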
    In "Getting More Flexible Scheduling in the RTSJ" Wellings and Zerzelidis propose some (more or less) "minor" extensions to the RTSJ API in order to enable hierarchical scheduling within the fixed priority framework.
    Since Andy Wellings is a member of the RTSJ Technical Interpretation Committee, is there any attempt to evolve the specification in a similar direction as described above, in order to support more flexible scheduling mechanisms and feasibility analysis?
    If there is ever an RTSJ 2.0 then more sophisticated scheduling support is one of the items on the wishlist. But there's no guarantee there ever will be an RTSJ 2.0.
    David Holmes

  • How do I sample a frequency or voice input using LabVIEW in real time?

    This is my first time using LabVIEW to sample in real time! I'm trying to set up a microphone so that when a voice is spoken into the mic I can sample the frequency and further customize the data in LabVIEW. The final goal is to amplify the signal and output it to a speaker, if possible via LabVIEW. My question is: do I need some type of evaluation board or microcontroller to sample the frequencies and save the sampled data to LabVIEW? Also, how would I go about doing this in real time, so that while the person is speaking into the mic the voice/frequency can be modified in LabVIEW instantaneously? If someone can give me a step-by-step way of performing this, please let me know.
    I currently use LabVIEW 8.2 and 8.5.

    There are many LabVIEW shipping examples that are built to acquire sound samples from a microphone attached to a sound card. There is also an example of how to play a sound file directly to the speakers, which you should definitely take a look at. These example VIs are found in the Sound folder, which is under Hardware Input and Output in the NI Example Finder. Depending on how advanced your frequency analysis will be, you might want to look into purchasing the Sound and Vibration Toolkit. For your specific application, it doesn't sound like you will need the LabVIEW Real-Time Module; you should be able to obtain very accurate results without it. Please view the article linked below for more information on this toolkit. Thanks!
    Sound and Vibration Toolkit for LabVIEW
    Meghan M.
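    As a supplement to those shipping examples: outside LabVIEW, the same capture-amplify-play loop can be sketched with the standard Java sound API, which may help make the signal path concrete. The gain value is arbitrary and no latency tuning is attempted here:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;
import javax.sound.sampled.TargetDataLine;

public class MicGainLoop {
    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false); // 16-bit mono, little-endian
        TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);       // sound-card input
        SourceDataLine spk = AudioSystem.getSourceDataLine(fmt);       // sound-card output
        mic.open(fmt);
        spk.open(fmt);
        mic.start();
        spk.start();

        byte[] buf = new byte[4096];
        double gain = 2.0;  // arbitrary amplification factor
        while (true) {
            int n = mic.read(buf, 0, buf.length);
            for (int i = 0; i + 1 < n; i += 2) {
                int s = (short) ((buf[i] & 0xFF) | (buf[i + 1] << 8));  // decode sample
                s = (int) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, s * gain));
                buf[i] = (byte) s;                                      // re-encode, clipped
                buf[i + 1] = (byte) (s >> 8);
            }
            spk.write(buf, 0, n);  // play the amplified block
        }
    }
}
```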
