Semaphores in LV 5.1: do they work across different processes?

Urgent!
I have a very large and complex application, which I need to build into an executable.
I need to have data from one or more files, which this exe writes periodically, available to another separate application.
I need to protect the file from reads during writes and vice versa, so I was thinking of semaphores, which I'm already using inside the large app.
So, I ran a trial.
(OS is NT 4 and NT 5.)
Created two different (but identical except for vi name) VIs which each create, acquire, release, and destroy a semaphore of the same name.
(the name comes from a global to ensure it is in fact the same.)
They both work OK: one waits for the other to release the semaphore before proceeding.
Then I built an exe from each with app builder.
Each of the exes can create, acquire, and destroy its own semaphore without affecting that of the other app (even though the semaphores have the same name).
All the info I can find on LV semaphores indicates that they are just like mutexes, except that they can be shared by more than one task (set by the "size" input of the Create Semaphore VI).
According to Microsoft documentation, mutexes and semaphores are kernel objects which can be used to synchronize across multiple processes (or threads, or tasks).
So what gives? Is the implementation in LV different from the Windows kernel's?
Anyone have any ideas?
Thanks
Dave

Semaphores are similar in behavior to mutexes, but they are not system mutexes; they rely on LabVIEW occurrences, which are local to the application.
To solve your problem you could open files denying read/write access to others. Then, if a file is already in use, other applications will receive an access error when attempting to open it.
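Here's roughly what that looks like from the reader's side in C with the Win32 API (an untested sketch; the file path and retry count are made up for the example). A share mode of 0 denies both read and write to everyone else, and a second opener gets ERROR_SHARING_VIOLATION until the first handle is closed:

    #include <windows.h>
    #include <stdio.h>

    /* Open a file for exclusive access, retrying while another process
       holds it open. Share mode 0 denies both read and write to others. */
    static HANDLE OpenExclusive(const char *path, DWORD access, int maxTries)
    {
        int i;
        for (i = 0; i < maxTries; i++) {
            HANDLE h = CreateFileA(path, access,
                                   0,            /* share mode 0: exclusive */
                                   NULL, OPEN_EXISTING,
                                   FILE_ATTRIBUTE_NORMAL, NULL);
            if (h != INVALID_HANDLE_VALUE)
                return h;
            if (GetLastError() != ERROR_SHARING_VIOLATION)
                break;            /* a real error, not contention */
            Sleep(50);            /* the writer has it open; retry shortly */
        }
        return INVALID_HANDLE_VALUE;
    }

    int main(void)
    {
        /* Hypothetical data file written periodically by the LabVIEW exe. */
        HANDLE h = OpenExclusive("C:\\data\\results.dat", GENERIC_READ, 100);
        if (h == INVALID_HANDLE_VALUE) {
            printf("file busy or inaccessible\n");
            return 1;
        }
        /* ... read the data here ... */
        CloseHandle(h);
        return 0;
    }

The writer would do the same with GENERIC_WRITE (and CREATE_ALWAYS or OPEN_ALWAYS), so reader and writer can never have the file open at the same time.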
HTH
Jean-Pierre Drolet
"Dave Karon" a ecrit dans le message de news:
[email protected]..
> Urgent!
> I have a very large and complex application, which I need to build
> into an executable.
> I need to have data from one or more files, which this exe writes
> periodically, available to another separate application.
> I need to protect the file from read during write and
> vi
ce-versa, so was thinking of semaphores which I'm using inside the
> large app.
> So, I ran a trial.
> ( OS is NT4 and NT 5)
> Created two different (but identical except for vi name) VIs which
> each create, acquire, release, and destroy a semaphore of the same
> name.
> (the name comes from a global to ensure it is in fact the same.)
> They both work ok.
> One waits for the other to release the semaphore before proceding.
> Then I built an exe from each with app builder.
> Each of the exe's can create, acquire and destroy its
> own semaphore without affecting that of the other app.
> (Even though the semaphores are named the same.)
> All the info I can find on the LV semaphores indicate that they are
> just like mutexes except that they can be shared by more than one task
> (set by "size" input of the create semaphore vi).
> According to microsoft documentation, Mutexes and semaphores are
> kernel objects which can be used to synchronize across multiple
> processes(, threads, or task
s).
> So what gives? Is the implementation in LV different
> than the Windows kernel?
> Anyone have any ideas?
>
> Thanks
> Dave
LabVIEW, C'est LabVIEW

Similar Messages

  • Reusing Human Tasks across BPM Processes with different Data Objects

    Hi
    JDeveloper 11.1.1.6, WLS 10.3.6, SOA/BPM 11.1.1.6
    I have defined 2 BPM Processes, P1 and P2, which has 2 different Process Data Objects O1 and O2.
    But I am reusing the same Human Task in both the processes.
    For the Human Task to support the activities in both the processes, I have added O1 and O2 Data Objects in the Human Task Definition.
    And hence the ADF Taskflow / page generated out of the Human Task will have both the Data Objects O1 and O2 as payload objects in the Page.
    When an instance of Process P1 is created, the payload O1 will have values, but O2 will be null.
    And when an instance of Process P2 is created, the payload O2 will have values, but O1 will be null.
    It works well like this, but I am a bit concerned about the performance from BPM Process and also ADF page.
    In ADF page, let us say, somehow I can control the rendering of the attributes or creation of the iterator bindings based on identifying which process is being executed.
    (By setting the iterator binding refresh property in page definition)
    In this way attribute bindings for O1 will not be created when P2 is being executed.
    But still when the Process instance is created, and when we see the payload structure of the human task in the EM - Audit Trail,
    I still see both data objects O1 and O2 are created in the Payload, but O1 will have nulls in process P2.
    So my question is, from performance point of view, is it advisable to define different Data Objects in reusable Human tasks ?
    Or should I have to define a separate Human Task definition which contains only the Data Objects related to the process it is being called from ?
    Thanks for any help
    Sameer 

    Martijn,
    You are correct in your assessment that the JAG in the current JHeadstart release cannot cope with multiple bc4j packages. There is no workaround for this. Upgrading to 9051 will not help.
    In the JHeadstart-ADF release, this restriction has been lifted. You can place your EO, VO and AM objects in different packages, and you can group them in a separate project (Model project), while generating your JHeadstart application in a ViewController project.
    We have a number of customers that use the latest JHeadstart-ADF builds to build their ADF apps. If you are interested in joining this beta program, please send an e-mail to [email protected]
    Steven Davelaar,
    JHeadstart Team.

  • Retrofit in SAP Solution Manager 7.1 Supported Across Different OS's?

    Is Solution Manager 7.1 Retrofit Capability supported with a different enhancement pack level and across different Operating Systems (HP-UX & AIX)?
    The specific scenario would be the setup of a dual landscape where the traditional Dev -> QAS -> PRD is on HP-UX 11.31. Then we would create a parallel 'Upgrade Project' landscape on IBM AIX 7.1, uDEV -> uQA (on AIX), perform an enhancement pack upgrade on uDEV and uQA, and use the Solution Manager Retrofit capability to synchronize transports between the two landscapes.
    Thanks in Advance!
    -Patrick

    Hi, we are configuring Google SMTP and getting the error below:
    No delivery to xxx.com, authentication required
    Message no. XS856
    Diagnosis
    The message was processed successfully in the SAP system. The mail server that is to receive the message for further processing requires authentication. Probably there is no logon data specified in the SAPconnect configuration.
    Information from external system (if available)
    smtp.gmail.com:587
    530 5.7.0 Must issue a STARTTLS command first. i91sm11178241qgd.25 - gsmtp
    Procedure
    Enter the logon data in the SAPconnect node.
    We are using the Gmail SMTP server "smtp.gmail.com" with port 587.
    Please advise.
    Regards,
    Sudarshan

  • Can BPM maintain flow across different applications

    Hello,
    I have a requirement where I have to maintain the business flow across different applications (Siebel CRM, Oracle Financials and third-party applications) without the end user knowing.
    Is it possible with BPM to navigate users from one application to another (CRM Application -> Third-party Application -> Financials)? If there is a solution available with BPM or a different application, please provide the relevant doc. Appreciate your help.
    Regards,
    Jay

    Hi,
    Yes. Oracle BPM can maintain a flow across multiple applications without the end user knowing. It is something it was built to do.
    First, applications like the ones you mentioned have an API (typically web service today but older applications exposed their API as Java POJOs, EJBs, COM, etc.). For Oracle BPM to access the applications, you need to expose the API in Oracle BPM's catalog. Customers that have a service bus expose the application APIs in the service bus and then Oracle BPM catalogs the service bus proxy services. Customers that do not have a service bus can expose the application APIs directly in Oracle BPM's catalog. Either way will work.
    Second, you'd design a process with a series of Interactive (human) activities and Automatic activities (ones that invoke the components that in turn invoke the APIs for your applications without human intervention). You'd add something called instance variables that carry the information throughout the life of the process for each work item instance. Interactive activities are placed into roles with a name associated with them (e.g. CSR or Manager) so the work done in each activity is done by the right type of person. Interactive activities can be set up so that the work item instance goes to a specific person instead of everyone in the role where the activity is located (e.g. send the instance to the CSR that talked to the customer last time).
    Third, at runtime as each work item instance is created (e.g. "Order 227"), the work item instance flows to one of the process's Interactive or Automatic activities. If it flows into an Interactive (human) activity, the end user assigned to the role where the activity is located clicks on an item in their web-based Oracle BPM WorkSpace inbox for the specific work item instance that they are interested in working on (again, perhaps "Order 227"). Once clicked by the end user, a UI presentation (either built using Oracle BPM's WYSIWYG presentation editor or a JSP) shows the work that needs to be done specifically by that end user. The UI presentation is already populated with the information gathered from a database or a previous API call from an Automatic activity. All this is done without end users having to cut out of one application and then paste into another application's screen; the right contextual information is sent to the right person at the right time. Once the end user finishes their manual task, the work might flow to an Automatic task that invokes another application's API automatically from the logic and variable information gathered in earlier activities in the process.
    All this is done without the end users knowing that they are flowing through multiple applications to get their work done.
    Hope this helps,
    Dan

  • LWAPP to CAPWAP across different software versions. ?

    Hi guys,
    I was wondering if someone could clarify what happens when you have controllers live on two different software versions.
    We have 4 controllers: 2 on version 4.2, which APs currently register to with LWAPP, and 2 new ones on version 7.
    On WCS, I have moved an AP currently registered on a version 4.2 controller and reassigned it to the newer controller (ver 7).
    The AP then reboots and I can see in the event logs that it comes back up on LWAPP after resolving the entry CISCO-LWAPP-CONTROLLER.
    On WCS, the AP shows as registered back on the old controller (ver 4.2), but the Primary and Secondary Controllers are still pointing to the newer ones (ver 7).
    According to the Cisco document, the transfer from LWAPP to CAPWAP should be seamless, so is this a problem migrating across different software versions? Will the AP sort itself out eventually?
    If I console into the AP and statically assign the new controller IP address, this forces the AP to download the newer software version (after multiple auto reboots) and it comes up on CAPWAP. This shows that the path between the AP and controller is OK.
    Ideally, I'm hoping that I can simply reassign our current APs to the new controller (and onto CAPWAP) rather than having to console into all our remote APs and statically reconfigure them. Not fun!
    Any advice would be appreciated..
    Thanks.
    Jon.

    Hi Nicolas,
    Thanks for the reply..
    Yes, I agree that once on version 7 it will perform a lookup for "cisco-capwap-controller". I forgot to say in my original post that I had already checked this, and it does resolve correctly to the two newer controller IPs. If I console into the AP and manually define the newer controller IP address, it all goes through successfully and I can see the discovery and correct registration process in the logs.
    So we know that it kind of works, but I'm hoping that there is a simpler solution from within WCS where I can do all this remotely, rather than consoling into every AP locally.
    Your suggestion of inputting the new controller IPs in WCS instead of the names sounds good; however, it needs to be on version 7 first to be able to do this? As you say, there is a field for this in version 4.2... Looks like there is a bit of chicken and egg going on here!
    Any other ideas or thoughts would be appreciated.
    Many thanks.
    Jon.

  • How to manage ApplicationDomain for loaded SWFs across different domains?

    I've been getting the following error when I'm loading a subsidiary SWF into a main one. The sub SWF contains the overlays. OverlayOne is a subclass of Overlay.
    TypeError: Error #1034: Type Coercion failed: cannot convert OverlayOne@18684f89 to Overlay.
         at HSRawVideoPlayer/setCurrentOverLay()
         at HSRawVideoPlayer/showOverlay()
         at HSRawVideoPlayer/dotRoll()
    I googled and found that I should probably be setting the applicationDomain of the loader context of the loaded SWF to be that of the loading SWF (as per Senocular's article on the subject), although I thought that in cases of conflict this would resolve to the loading SWF's ApplicationDomain, so it would not be necessary.
    But I've also read that this won't work across different domains, and that's the situation here: the client wants the URLs of the loading and loaded SWFs to be fully qualified. Will setting the ApplicationDomain of the loaded SWF to be that of the parent solve the problem above, even if they are in different domains? Can someone show me a short code snippet? Thanks!

    Hi,
    DSS has inbuilt functionality to compare transactions against built-in rules. If a transaction takes place not in accordance with the built-in rules, it is treated as a "violation" and is reported.
    Virsa is an example of a DSS tool. Here you can build rules for access and process; constantly compare the actuals vs. the rules; and report the violations.
    In SAP R/3, for example, the T-code PFCG is tailored for access control, while invoice parking (F-63) is tailored for process control. Using Virsa, you can address the risks involved in both, namely access and process control. This is an example of how DSS can help in risk integration.
    In these tools, we have an engine for building the rules; based on this, we build the rules. These rules are stored in a table. When a transaction for which we have built a rule takes place, the system compares the rules vs. the actuals. Any inconsistencies are reported as violations.
    Hope this helps.
    Regards,
    Ramesh

  • How do I fix colour picker to work across different colour-managed monitors?

    Hey everyone!
    I'm assuming this problem I'm having stems from having colour-calibrated monitors, but let me know if I'm wrong!
    To preface, this is the setup I have:
    Windows 7
    3 monitors as follows, all have individual colour profiles calibrated using the Spyder 3
    Cintiq 12WX
    Dell U2410
    Dell 2409WFP
    Photoshop CS6 - Proofed with Monitor RGB, and tested with colour-managed and non-colour-managed documents
    I usually do most of my work on the Cintiq 12WX, but pull the Photoshop window to my main monitor to do large previews and some corrections. I noticed that the colour picker wouldn't pick colours consistently depending on which monitor the Photoshop window is on.
    Here are some video examples:
    This is how the colour picker works on my Dell U2410: http://screencast.com/t/lVevxk5Ihk
    This is how it works on my Cintiq 12WX: http://screencast.com/t/tdREx4Xyhw9
    Main Question
    I know the Cintiq's video capture makes the picture look more saturated than the Dell's, but it actually looks fine physically, which is okay. But notice how the Cintiq's colour picker doesn't pick a matching colour. It was actually happening the opposite way for a while (Dell was off, Cintiq was fine), but it magically swapped while I was trying to figure out what was going on. Anyone know what's going on, and how I might fix it?
    Thanks for *any* help!
    Semi-related Question regarding Colour Management
    Colour management has always been the elephant in the room for me since I first tried to calibrate my monitors with a Spyder colourimeter years ago. My monitors looked great, but Photoshop's colours became unpredictable, and I decided to abandon the idea of calibrating my monitors for years until recently. I decided to give it another chance and follow some tutorials and articles in an attempt to keep my colours consistent across Photoshop and web browsers, at least. I've been proofing against monitor colour and exporting for web without an attached profile to keep pictures looking good in web browsers. However, pictures exported as such will look horrible when uploaded to Facebook. Uploading pictures with an attached colour profile makes them look good on Facebook. This has forced me to export 2 versions of a picture, one with an attached colour profile and one without, each time I want to share it across different platforms. Is there no way to fix this issue?
    Pictures viewed in Windows Photo Viewer are also off-colour, but I think that's because it's not colour managed... but that's a lesser concern.

    I think I've figured out the colour management stuff in the secondary question, but the weird eyedropper issue is still happening. Could just be a quirk from working on things across multiple monitors, but I'm hoping someone might know if this is a bug/artifact.
    Going to lay out what I inferred from my experiments regarding colour management in case other noobs like me run into the same frustrations as I did. Feel free to correct me if I'm wrong - the following are all based on observation.
    General Explanation
    A major source of my problems stem from my erroneous assumption that all browsers will use sRGB when rendering images. Apparently, most popular browsers today are colour-managed, and will use an image's embedded colour profile if it exists, and the monitor's colour profile if it doesn't. This was all well and good before I calibrated my monitors, because the profile attached to them by default were either sRGB or a monitor default that's close to it. While you can never guarantee consistency on other people's monitors, you can catch most cases by embedding a colour profile - even if it is sRGB. This forces colour-managed browsers to use sRGB to render your image, while non-colour-managed browsers will simply default to sRGB. sRGB seems to be the profile used by Windows Photo Viewer too, so images saved in other wider gamut colour spaces will look relatively drab when viewed in WPV versus a colour-managed browser.
    Another key to figuring all this out was understanding how Profile Assignment and Conversion work, and the somewhat-related soft-proofing feature. Under Edit, you are given the option to either assign a colour profile to the image, or convert the image to another colour profile. Converting an image to a colour profile will replace the colour profile and perform colour compensations so that the image will look as physically close to the original as possible. Assigning a profile only replaces the colour profile but performs no compensations. The latter is simulated when soft-proofing (View > Proof Colors or ctrl/cmd-Y). I had followed bad advice and made the mistake of setting up my proofing to Monitor Color because this made images edited in Photoshop look identical when the same image is viewed in the browser, which was rendering my images with the monitor's colour profile, which in turn stemmed from yet more bad advice I got against embedding profiles. This should formally answer Lundberg's bewilderment over my mention of soft-proofing against Monitor Colour.
    Conclusion and Typical Workflow (aka TL;DR)
    To begin, these are the settings I use:
    Color Settings: I leave it default at North American General Purpose 2, but will probably switch from sRGB to AdobeRGB or ProPhoto RGB so I can play in a wider gamut.
    Proof Setup: I don't really care about this anymore because I do not soft-proof (ctrl/cmd-Y) in this new workflow.
    Let's assume that I have a bunch of photographs I want to post online. RAWs usually come down in the AdobeRGB colour space - a nice, wide gamut that I'll keep while editing. Once I've made my edits, I save the source PSD to prep for export for web.
    To export to web, I first Convert to the sRGB profile by going to Edit > Convert to Profile. I select sRGB as the destination space, and change the Intent to either Perceptual or Relative Colorimetric, depending on what looks best to me. This will convert the image to the sRGB colour space while trying to keep the colours as close to the original as possible, although some shift may occur to compensate for the narrower gamut. Next, go to Save for Web. The settings you'll use:
    Embed Color Profile CHECKED
    Convert to sRGB UNCHECKED (really doesn't matter since you're already in the sRGB colour space)
    and Preview set to Internet Standard RGB (this is of no consequence - but it will give a preview of what the image will look like in the sRGB space)
    That's it! While there might be a slight shift in colour when you converted from AdobeRGB to sRGB, everything from then on should stay consistent from Photoshop to the browser.
    Edit: Of course, if you'd like people to view your photos in glorious wide gamut in their colour-managed browsers, you can skip the conversion to sRGB and keep them in AdobeRGB. When Saving for Web, simply remember to Embed the Color Profile, DO NOT convert to sRGB, and set Preview to "Use Document Profile" to see what the image would look like when drawn with the embedded color profile.

  • Can I use the same thread-safe variable in different processes?

    Hello,
    Can I use the same thread-safe variable in different processes? My application has a log file used to record some events; the log file will be accessed by different processes. Is there any synchronization method to access the log file with CVI?
    David

    Limiting concurrent access to shared resources is better achieved by using locks: once created, a lock can be acquired by one requester at a time by calling CmtGetLock, the others being blocked in the same call until the lock is free. If you do not want to block while waiting, you can use CmtTryToGetLock instead.
    Don't forget to discard locks when you have finished using them, or at program end.
    Alternatively, you can PostDeferredCall a unique function (executed in the main thread) to write the log, passing the appropriate data to it.
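    Something like this, as a quick sketch (the Cmt* lock functions live in the CVI Utility Library; double-check the exact signatures in your CVI version's help). Note that these locks serialize threads within one process; for true cross-process protection of the log file you would still need something like an exclusive file open or a named Win32 mutex:

        #include <utility.h>   /* CVI Utility Library: CmtNewLock, CmtGetLock, ... */

        static CmtThreadLockHandle gLogLock = 0;

        /* Create the lock once at program startup. */
        int InitLogLock(void)
        {
            return CmtNewLock(NULL, 0, &gLogLock);   /* unnamed in-process lock */
        }

        /* Serialize log writes from any thread in this process. */
        void WriteLog(const char *msg)
        {
            CmtGetLock(gLogLock);            /* blocks until the lock is free */
            /* ... append msg to the log file here ... */
            CmtReleaseLock(gLogLock);

            /* Non-blocking variant:
               int gotIt = 0;
               CmtTryToGetLock(gLogLock, &gotIt);
               if (gotIt) { ...; CmtReleaseLock(gLogLock); }   */
        }

        /* Discard the lock at program end. */
        void DiscardLogLock(void)
        {
            CmtDiscardLock(gLogLock);
        }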
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not giving me a kudos?

  • In what circumstances could my app service simultaneously process requests in different processes?

    Hi,
    We recently encountered a scenario in our Cloud Services Web Role web app where simultaneous API calls on the same item occurred, such that IIS automatically allocated them to be handled on separate threads inside the same w3wp process. We needed to handle this scenario by rejecting/ignoring whichever call came in second, so we just used a static ConcurrentDictionary object to track which items were currently "locked".
    But I'm concerned that as we scale up, eventually Azure will start routing such calls to different w3wp processes or even different machines. In such a case, a static in-memory object is obviously not going to do the job. For now, all I want to do is find out what happens in the scenario where two requests are handled by different processes/machines, so what's the easiest way to trigger an Azure web role into this behaviour?
    Thanks
    Dylan

    I've confirmed via MS support that my assumptions were basically correct: if you have Instances = 1, then all requests for a single web application will be handled by one instance of the relevant w3wp process.
    If Instances > 1, then they might be handled by w3wp processes on separate virtual machines. There's no known scenario where there would be multiple w3wp processes on one machine, so if you need to handle concurrency for a scalable web app, you have to use some central resource that is available to all instances.

  • Material for a single project spread across different locations

    Dear Friends
    my client is executing turnkey projects. One single project may be spread across different geographical locations. Say the project name is "PRJ001".
    PRJ001 will be executed in Bombay, Hyderabad, Chennai.
    There are 2 scenarios for procuring material:
    1. Since these places are quite far away, I might procure material from a vendor near each location.
    2. I might give a PO to one single vendor to dispatch material to these different locations for project execution.
    In both cases, how do we handle the material?
    What will be the best option? Should I create a storage location (my client stores material at the site, as these projects run for years)?
    I'm procuring material as Project Stock (Q).
    Say the Bombay location needs 500 units of material, Hyderabad needs 700, and Chennai needs 300.
    Now how do I ensure that the right material in the right quantity is reaching the respective project site?
    In some cases, the project runs in a remote location where there won't be any connectivity or access to the system. In such cases, if the site engineer enters GR/IR and activity confirmations in an Excel sheet and later sends an e-mail with this Excel sheet to the office, how can we upload it to the system so that it updates the required fields?
    Please give your suggestions.
    I appreciate your support and suggestions.
    Thanks

    Hi Amaresh,
    I think your option no. 1 holds good for your requirement. You can define the corresponding project sites in Chennai, Mumbai and the other places as storage locations. It is better to define separate storage locations for the different site locations.
    I think having a delivery schedule with the specific requirement quantities and the storage location should resolve your issue of handling different quantities for the same material. You can discuss and sort this out with your MM consultant.
    Hope this gives some idea.
    Regards,
    L.Balachandar

  • Creating process instance of a different process and passing arguments

    Creating Process instance of a different process:
    I have two different processes: Main_Flow (id: MainFlow) and Second_Flow (id: SecondFlow). In the first process I am reading a CSV file. Each line of the file has four columns. After reading each line I have to initiate Second_Flow and pass the data read from the file. (Please find the code for the whole process below.)
     fileReader = FileReader(arg1 : fullFileName);  // fullFileName holds the file name and path
     Java.Io.BufferedReader reader = BufferedReader(arg1 : fileReader);
     String str;
     int countLines = 0;
     while ((str = reader.readLine()) != null) {
         strColumn = str.split(delim : ",");
         int ColumnCnt = 0;
         // Copy the four columns of this line into the argument map.
         while (ColumnCnt < 4) {
             //logMessage("Value at Column: " + ColumnCnt + " is " + strColumn[ColumnCnt]);
             if (ColumnCnt == 0)
                 arrLoanData["appNo"] = strColumn[ColumnCnt];
             else if (ColumnCnt == 1)
                 arrLoanData["custNo"] = strColumn[ColumnCnt];
             else if (ColumnCnt == 2)
                 arrLoanData["loanAmm"] = strColumn[ColumnCnt];
             else if (ColumnCnt == 3)
                 arrLoanData["loanDate"] = strColumn[ColumnCnt];
             ColumnCnt = ColumnCnt + 1;
         }
         arrLoanData["descriptionArg"] = "AutoInstance: " + formatTime('now', timeStyle : Time.SHORT);
         arrLoanData["genByArg"] = "Automatic";
         // Create ONE Second_Flow instance per line, after all four columns are
         // filled (originally this call sat inside the column loop and fired
         // once per column, i.e. four times per line).
         ProcessInstance.create(processId : "/SecondFlow", arguments : arrLoanData, argumentsSetName : "BeginIn");
         countLines = countLines + 1;
     }
    (“The code is in Java and not in PBL”)
    I have to pass appNo, custNo, loanAmm and loanDate as the arguments. The argument will be of Any[String] type. The argument set name of Second_Flow is “BeginIn”. But I am not getting anything in Second_Flow.
    What can I do in the argument mapping of begin of Second_Flow to get the passed argument (array)?

    The 'arguments' parameter of the ProcessInstance.create method receives a map of the arguments that the argument set named by 'argumentsSetName' will receive.
    So, for example, if your second flow's argument set has three arguments (String name, Decimal value, and String[] content), your method invocation would be:
    ProcessInstance.create(processId : "/SecondFlow", arguments : {"name": strNameFromCsv, "value": valueFromCsv, "content": ["a","b","c","d"]}, argumentsSetName : "BeginIn");

  • Is there a different process to upload a video podcast or is it the same?

    Is there a different process to upload a video podcast, or is it the same process?

    Exactly the same. The only difference lies in the code within the 'enclosure' tag, which needs to match the file type, and most podcast creation programs and services will handle that for you. However, you need to be aware that you are limited to .m4v, .mov or .mp4 files (not Windows Media, RealMedia or Flash).
    You may find my 'get you started' page on podcasting helpful:
    http://rfwilmut.net/pc

  • How to maintain the same column widths across different table sections in an OBIEE report

    I have a prompt and a report (analysis) on the dashboard. In the report, there are table sections.
    The problem is that when I run the report, the column widths vary across the different sections of the table. The report also shows totals.
    The report needs to fit so that there is no horizontal scroll bar.
    I have already set the same width in the additional properties of the column values, header and folder, and in the total's properties, in both the Criteria and Results tabs.
    I am new to OBIEE, so I don't know how to fix it. Any help would be appreciated.
    thanks in advance.

    You might want to post to the OBIEE discussion area
    Business Intelligence Suite Enterprise Edition

  • Is it possible to cluster appliances across different subnets?

    We are attempting to cluster two appliances across different subnets in order to provide greater survivability. Although we were able to cluster the appliances, the manageability of the appliances has become somewhat impaired. We've opened ports 443, 22 and 2222 between the two appliances. The appliances are C350s running AsyncOS 7.1.3-010. Are we missing something?
    Thanks,
    Rob

    Rob,
    Are these appliances communicating using IP addresses? If yes, in order to join a cluster using IP addresses, there must be a reverse DNS (PTR) record configured in the DNS server for the Cisco IronPort appliance. Please check whether the reverse lookup works. If not, it might be another issue.
    Regards,
    Jyothi Gandla
    Customer Support Engineer

  • Non-Reentrant VI executing in different Process Stacks

    I am creating a DLL which can be called by several different processes on a machine at the same time. If I have a non-reentrant VI in the API, do the same restrictions apply between processes as if the calls were in the same process? For example, if I call "Get Shorty.vi", which is a non-reentrant VI, from 2 places in the same process at the same time, the first call will start executing and the second call will wait until the first call is done. When called from 2 different process stacks, there is supposed to be total separation of memory windows, so logic tells me that this restriction would not apply if that same VI were called from 2 different processes.
    The catch is that the VI Library installed by the run-time engine is used by both processes as well. Would this, in effect, cause the same behavior between 2 calls to the same VI both in process and out?
    Thanks for any help you can provide. This is not an easy concept to describe, so feel free to ask for clarification if needed.
    CyberTazer
    Software Systems Engineer

    I understand that the memory spaces are supposed to be completely separate. The question is, if 2 processes call the same function in the same DLL at the same time, will one process have to wait on the other process to finish before being allowed to execute? My feeling on this is no, it would not have to wait, but I am hoping that someone out there has a more definitive answer.
    There are no shared memory mechanisms set up in this VI, i.e., for persistent data between execution processes. I am not even sure if this is possible with LV DLLs, but I thought I would throw that in there since I know it is possible in general.
    Thanks again for your help guys, keep 'em coming. Hopefully this will help clear up other people's perceptions of how DLL execution is handled in LV if we can nail this down a bit more.
    CyberTazer
    Software Systems Engineer
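    As an aside, if you do need two processes to actually wait on each other, the usual Win32 approach is a named kernel mutex, which is shared machine-wide by name (unlike LabVIEW semaphores, which as discussed above are application-local). A minimal sketch in C; the mutex name is made up for the example, and the call could live in the DLL itself or behind a Call Library node:

        #include <windows.h>

        /* Serialize a critical section across processes with a named kernel
           mutex. CreateMutexA opens the existing mutex if the name is already
           in use, so every process ends up sharing the same kernel object. */
        int DoSerializedWork(void)
        {
            HANDLE m = CreateMutexA(NULL, FALSE, "MyApp_GetShortyLock");
            if (m == NULL)
                return -1;

            WaitForSingleObject(m, INFINITE);  /* blocks while another process holds it */
            /* ... work that must not run concurrently in any process ... */
            ReleaseMutex(m);

            CloseHandle(m);
            return 0;
        }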
