Wait on IRQ usage

I'm reading over the help section on this method and am wondering: can I execute my code without wiring out an asserted IRQ? In our other existing projects, we've been wiring T/F cases to the Timed Out output of this method.
To illustrate:
Would the following (located in the RT portion) implicitly execute on an asserted interrupt from my FPGA, or would I need to surround the FGV with a case structure based on the Asserted IRQs or Timed Out output?
Yes, I would test this, but I am not at the office to deploy to a chassis.

All you have to do is maintain dataflow!
As long as your FGV sits between the Wait on IRQ and the Acknowledge IRQ nodes - in terms of dataflow - it will run after the IRQ fires.
The "Asserted IRQs" terminal is used when you are waiting for more than one IRQ.
Christian
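
For what it's worth, the same wait -> work -> acknowledge ordering applies in the text-based host APIs. Here is a minimal sketch, assuming the nifpga Python bindings; the bitfile path, resource name, and process_data stand-in are hypothetical:

```python
from nifpga import Session

def process_data(session):
    """Stand-in for the FGV: whatever must run after the IRQ fires."""
    pass

# Hypothetical bitfile and RIO resource names.
with Session("MyFpga.lvbitx", "RIO0") as session:
    status = session.wait_on_irqs(0, timeout_ms=5000)   # block until IRQ 0 asserts or 5 s pass
    if not status.timed_out:
        process_data(session)                           # runs strictly after the IRQ fired
        session.acknowledge_irqs(status.irqs_asserted)  # acknowledge only once the work is done
```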

Similar Messages

  • Wait on Multiple IRQs

    If I wire an array of IRQs {0,1,2} to the "Wait on IRQ" Invoke Method with a timeout of -1, does the method wait for all of the IRQs or just one?

    Hi Richard,
    It seems that the function will wait until it receives any one of the interrupts specified in the array.
    You can see which one fired from the "IRQs Asserted" array output of the "Wait on IRQ" method node.
    Best regards,
    Stephen C
    Applications Engineer
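
    If it helps, that "any one of the set" behavior might look like this from the nifpga Python bindings (a hedged sketch; the bitfile and resource names are hypothetical): the call returns as soon as any listed IRQ asserts, and irqs_asserted reports which.

    ```python
    from nifpga import Session

    with Session("MyFpga.lvbitx", "RIO0") as session:
        # Returns when ANY of IRQs 0, 1, or 2 asserts (or 10 s elapse) -- not all of them.
        status = session.wait_on_irqs([0, 1, 2], timeout_ms=10000)
        if not status.timed_out:
            print("asserted:", status.irqs_asserted)     # e.g. [1] -- the one(s) that fired
            session.acknowledge_irqs(status.irqs_asserted)
    ```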

  • Serial I/O IRQ Method

    I am trying to do some simple I/O using an NI 9870 serial module in a cRIO with an NI 9014 host (LabVIEW 9, RIO 3.2.1). I am already using 3 DMA FIFOs for other things, so I was trying to pass the data in and out of the FPGA using an interrupt-driven approach. The baud rate is 57600, which works out to roughly 60 characters in 15 milliseconds. So here is what I tried, and the RT host immediately stops running after the timeout. I realize there are all sorts of unhandled conditions in this code, but it won't even eat one character...
    So here is the FPGA side - there is a pre-allocated array of 255 elements - the loop runs until the array is nearly full or a timeout occurs - at which point an interrupt is generated -
    Here is the host side - wait for an IRQ and then read the array, after that ack the IRQ -
    Am I missing something simple ?????

    Hello, is the Wait function in your FPGA code set to mSec, uSec, or ticks? If it's set to mSec, then your FPGA code will wait 15 ms before even executing the second frame of the sequence, so you will reach a timeout on the Wait on IRQ every time (when it is set to 5 ms).
    Also, I suggest wiring a stop control to the stop terminal of your host loop rather than a true constant, so that you can manually stop the loop and let the code complete (including closing the FPGA VI reference, which I assume is sitting right outside your while loop). The Close FPGA VI Reference stops your FPGA code and cleans up the reference.
    National Instruments
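
    To put numbers on the timing mismatch described above (a back-of-the-envelope sketch, assuming 10 bits per character on the wire):

    ```python
    baud = 57600
    chars_per_sec = baud / 10                 # ~5760 characters per second at 8N1
    fill_time_ms = 60 / chars_per_sec * 1000  # ~10.4 ms just to receive 60 characters

    # If the FPGA's Wait is mistakenly in mSec, the loop stalls 15 ms before it
    # can assert the IRQ, so a host-side Wait on IRQ with a 5 ms timeout expires
    # first, on every iteration.
    print(f"{fill_time_ms:.1f} ms to receive 60 chars; 5 ms timeout < 15 ms wait")
    ```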

  • IRQ from FPGA delay

    Hi.
    I have encountered a problem in my code:
    I am using a cRIO-9012 whose FPGA drives an NI 9871 4-port RS-485 module.
    My code is using FPGA code and low-level communication VI's from this example:
    http://zone.ni.com/devzone/cda/epd/p/id/6166 
    In my application I communicate through the NI 9871, using the FPGA. The communication seems to work, but I am experiencing some unexpected delays.
    The delay appears at the "Wait on IRQ" Invoke Method in "901x 9871 DMA Read (Sub)" (see attached code). The FPGA code sets this IRQ when the current operation has completed, but I am experiencing unreasonably long delays here (10+ ms).
    I tested it without waiting for the IRQ, and this gave me a delay of 1 ms on every attempt - and I still got valid data.
    So it seems like the IRQ is held longer than necessary.
    In my application, time is critical. The length of the incoming telegrams is fixed.
    Thank you for your help.
    Attachments:
    code.zip ‏461 KB

    OK, here are the project files. I changed the FPGA VI a bit and attached a screenshot of the new error that occurs. I just want to transfer simple data and receive it into my shift register.
    Attachments:
    delayprob.jpg ‏90 KB
    dmatry.zip ‏71 KB

  • Using FPGA program to collect data for long time without 'gap'

    Our data collection system has an NI PCIe-7852R card. We want to collect data at up to 10 kHz for 10 to 30 minutes. The amount of data is too large, so we have to separate it into multiple arrays and save it in multiple files. We do not want to lose data while saving. Does anyone have an idea of how to do it? We would appreciate your help.
    My current method:
    1. Using FIFO memory: in the FPGA program loop, write data from the AIs to the FIFO; after a certain amount of data has been written, say 2000 points, assert an IRQ.
    2. In the host program loop, wait for the IRQ; once the IRQ comes, read in the FIFO data, do some processing, and put the data into a preallocated array; once the preallocated array is full, save it to disk. (A sketch of this host side appears at the end of this thread.)
    So far the method is not working.
    First, there never seems to be an IRQ; the Wait on IRQ (Invoke Method) never takes any time.
    Second, reading the FIFO in the host program loop seems to take no less time than the FPGA program takes to write the same amount of data into the FIFO...
    So I have extra questions:
    1. Does the 7852R card and its FPGA still support IRQs?
    2. What is the reading speed of a FIFO read in the host program? I thought it should be much faster than the FPGA program writing it...
    Any help will be appreciated.
    Thanks a lot

    Here is the FPGA program block.
    Part of the host program block inside the loop.
    Part of the running results:
       1. Some data is missing in every FIFO read in the host program.
       2. The FIFO read in the host program takes more time than the AIs => FIFO write in the FPGA program.
       3. Of course, each host loop takes more time than the FPGA program takes to get the same amount of data...
    Attachments:
    FPGA_AI_block.png ‏48 KB
    HOST_AI_inloop_block.png ‏31 KB
    Running result.png ‏124 KB
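
    For reference, the host side of the wait-read-acknowledge scheme sketched in the question might look like this with the nifpga Python bindings (a hedged sketch; the bitfile, resource, and FIFO names are hypothetical, and the block size of 2000 matches the post):

    ```python
    from nifpga import Session

    BLOCK = 2000  # samples per IRQ, matching the FPGA side described above

    with Session("MyAcq.lvbitx", "RIO0") as session:  # hypothetical names
        fifo = session.fifos["AI_FIFO"]
        fifo.start()
        buffered = []
        while len(buffered) < 10 * BLOCK:             # gather 10 blocks, then stop
            status = session.wait_on_irqs(0, timeout_ms=1000)
            if status.timed_out:
                continue                              # no IRQ yet; FPGA is still filling
            read = fifo.read(BLOCK, timeout_ms=500)   # pull one block into host memory
            buffered.extend(read.data)
            # read.elements_remaining shows whether the host is falling behind.
            session.acknowledge_irqs(status.irqs_asserted)
    ```

    One thing worth checking against the symptoms above: an IRQ that is never acknowledged stays asserted, so a subsequent Wait on IRQ can return immediately without any new data being ready.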

  • Introduce delay of 7 Days for each message of a particular type.

    I have a requirement that after receiving a message of a particular type, PI will hold the message for 7 days before processing it. Originally I thought a wait step in BPM might be the appropriate solution, but it isn't (memory usage / BPM blocking)...
    I'm not sure what the best approach is to meet this requirement, other than a completely custom solution (a custom table to store the message, with a background task that continues processing after 7 days).
    Ideally the solution will use standard PI functionality/tools, even using the file adapter to write the messages to the file system and then coming back and reprocessing them in 7 days...
    I'm not sure...
    Any input?

    I've had a bit of a think about this... here is what I'm thinking, given there is no definitive way of doing this correctly...
    1. An inbound file adapter reads the inbound files.
    2. Transform and write the file to an outbound directory with a specific file name mask, "Day1_name.xml", based on a UDF (see the sketch after this list).
    3. Create 7 file adapters, each running on a separate day 1 -> 7, picking up files with a specific mask "Day1_*" (Day1 == Monday, and so on).
    4. Write the file to its ultimate destination and archive the message when the correct file adapter is triggered.
    Advantages:
    1. No OS involvement - whilst writing a cron job (shell script) to do this would be relatively simple, it adds another point of failure.
    2. No excessive wait times or resource usage on PI.
    3. In the event of a PI crash, the file system will be intact, whereas thread.sleep(?????) will not be.
    Disadvantages:
    1. Given the possible downfalls of the other solutions, this seems to be the best.
    Am I missing anything ...
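
    The day-prefix logic from step 2 might look like the following (a Python sketch of the naming rule only; an actual PI UDF would be written in Java, and the file name is hypothetical):

    ```python
    from datetime import datetime

    def day_prefix(name: str) -> str:
        """Prefix a file name with the current weekday (Day1 == Monday), so the
        matching day's file adapter picks it up exactly 7 days later."""
        day = datetime.now().weekday() + 1   # Monday -> 1 ... Sunday -> 7
        return f"Day{day}_{name}"

    print(day_prefix("order.xml"))  # on a Monday: "Day1_order.xml"
    ```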

  • Interrupt Request

    This is Noah Yang of Texas A&M, studying under Dr. Zourntos for the senior design class (ELEN 405). I have a couple of questions about interrupt requests (IRQs) as we embark on working on the brain.
    The board we were given is a PCI-7831R, so we would have it in the PC to program, and then we can have it start up running the program that we develop when it gets power. We can then provide the 5V to the board and it will work on the robot. We need a more specific way to do that.
    According to the FPGA Module training website at
    http://zone.ni.com/devzone/conceptd.nsf/2d17d611efb58b22862567a9006ffe76/62b388db80b5570286257037006...
    If you look at the Lesson 6 PowerPoint slides, there is some explanation about generating a physical interrupt FROM THE FPGA TO THE HOST.
    Unfortunately, we will attach our brain (the PCI-7831R) to the autonomous robot, not keep it in the PC; we would only have it in the PC while programming. Hence, I don't think we will have a host VI for Windows/the PC, unless we test the brain of the robot by monitoring its operation on the PC.
    We hope that the MAIN PROGRAM (running to the beacon target) in the FPGA VI can acknowledge the interrupt (obstacle detection, avoidance, jump back to the main program) from the FPGA VI itself. The FPGA can then wait for the interrupt to be acknowledged. Lastly, the IRQ is cleared by the FPGA.
    If it is impossible to have Wait on IRQ in the FPGA VI without a host VI, I wish to learn how to build our own code (from scratch) acting like this kind of IRQ as an alternative.
    For reference, our brief algorithm follows.
    ==========================================================================
    MAIN PROGRAM:
      *rush to the target (signaling a 5 kHz sound) based on the microphone sensor.
      *Interrupt 1 (When Obstacle is detected at 2 feet away)
      *Interrupt 2 (crash, when bumper sensor is rising edge triggered)
      *End the program when the robot gets to the target
    INTERRUPT 1:
       *stop
       *Select direction by comparison of two voltage from two Ultrasonic
        sensors
       *turn until both sensors have above-threshold voltage
       *go straight 2.5 feet
       *return to the beginning of the MAIN PROGRAM
    INTERRUPT 2:
       *stop
       *step back 1 foot
       *choose direction
       *return to the main program
    ==========================================================================
    Thanks.
    Noah
    Noah Haewoong Yang
    Electrical Engineering
    Dwight College of Engineering
    Texas A&M University
    801 Spring Loop 2506
    College Station, TX 77840
    Tel: 979-997-1145
    AIM: maroonsox12
    I am the proudest member of Fightin' Texas Aggie Class of 05. WHOOP!

    Hi Noah,
    Your best bet here sounds like a state-machine-based architecture, which will allow your FPGA VI to handle the various conditions internally, without needing to generate an external interrupt. IRQs in the context of a LabVIEW program are sent from the FPGA to a Windows or Real-Time host VI, where they are processed by the operating system and passed to the VI for handling. If the only control hardware on your robot is the FPGA board, then you won't need to generate these interrupts at all, since there would be no computer with an operating system to receive them. Instead, if you use a state machine architecture (a case structure inside a while loop, with a different case for each state and a shift register to store the next state), you can have the FPGA VI handle everything internally.
    While we cannot develop your code for you, the basic structure for your algorithm could be as follows:
    Move state - move towards the target and check the sensors for an obstacle. If there is no obstacle, the next state is Move. If an obstacle is 2 feet away, the next state is Obstacle. If there is a crash, the next state is Crash. If the target is reached, the next state is End.
    Obstacle state - handle the interrupt 1 code from your algorithm (this will require multiple states). When done, set the next state back to Move.
    Crash state - handle the interrupt 2 code (this may require multiple states). The next state is Move.
    End state - done.
    Of course, this is a highly simplified example, but you can see the basic structure. There are many examples of using a state machine architecture, including a design template that comes with LabVIEW that you can modify to suit your needs; a text-based sketch of the same skeleton follows below.
    Cheers,
    Matt Pollock
    National Instruments
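
    For what it's worth, the state machine described above reduces to the following skeleton (a hedged Python sketch of the control flow only; in LabVIEW this would be a case structure in a while loop, with the state variable playing the role of the shift register, and read_sensors standing in for the FPGA I/O):

    ```python
    import random

    def read_sensors():
        """Placeholder for the FPGA I/O: report what the robot sees."""
        return random.choice(["clear", "obstacle", "crash", "target"])

    state = "move"
    while state != "end":
        if state == "move":
            event = read_sensors()
            state = {"clear": "move", "obstacle": "obstacle",
                     "crash": "crash", "target": "end"}[event]
        elif state == "obstacle":
            # Interrupt 1 logic: stop, compare the ultrasonic sensors, turn,
            # go straight 2.5 feet... (multiple sub-states in practice)
            state = "move"
        elif state == "crash":
            # Interrupt 2 logic: stop, back up 1 foot, choose a direction...
            state = "move"
    print("target reached")
    ```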

  • CRio 9014, 9116 and 9870 - serial write not working

    Hi,
    I am trying to read strings from a text file and write them to the serial port on the 9870. The strings are being read fine, they are getting into the Write_FIFO, and the Wait on IRQ passes all the bytes to the serial port. BUT I am stuck, because there is nothing coming out of the serial port. I tried using a scope to see if there was any signal, but there wasn't any. I have attached the FPGA and host sides of the program for any quick suggestions.
    Can somebody please help? I will appreciate it a lot.
    Thanks,
    Ajay
    Attachments:
    FPGA.VI.jpg ‏146 KB
    Host_1.jpg ‏111 KB
    Host_2.jpg ‏134 KB

    Hi,
    I talked to NI support, and they suggested trying to work through the serial loopback example. I tried to implement that, and at the end of the run the execution throws no errors, but I do see the Timed Out indicator going true for the read part of the program. I have tried everything I know of; I will appreciate any help. I have attached the FPGA and host programs if anybody wants to check them. The digital output channels were used just to see whether the sequence structure runs, and it does run fine.
    Thanks,
    Ajay
    Attachments:
    9870 Loopback DMA-FPGA.vi ‏81 KB
    Serial Loopback DMA-Host.vi ‏525 KB

  • Books For DB level Performance Analysis and fixing.

    Hi all, I want expert advice on the below.
    Which book would be the best one for gaining knowledge of DB-level performance analysis (my production DB is Oracle 10g)?
    I have read books that are specific to SQL-level performance analysis, like Troubleshooting Oracle Performance by Christian Antognini, Effective Oracle by Design by Thomas Kyte, and Cost-Based Oracle Fundamentals by Jonathan Lewis.
    Now I want something specific to OVERALL DB-level tuning, like DB wait events, CPU usage, memory usage, PGA sizing, DB parameters, and performance views and scripts for digging deeper into issues, etc.
    Please let me know.

    Hi,
    A similar issue has been mentioned here many times; please use the search mechanism:
    https://forums.oracle.com/forums/search.jspa?threadID=&q=performance+database+and+book&objID=c18&dateRange=all&userID=&numResults=15
    Regards,
    Helios

  • Massive Mountain Lion memory leak

    I will start describing the problem where I first discovered it.
    My early 2011 MBP had been asleep, and upon opening and waking it, it was incredibly slow. I opened activity monitor and couldn't believe my eyes.
    I have 8 GB of RAM, and all but 8 MB was in use. Around 6 GB was "inactive". I had no applications running besides Activity Monitor.
    I opened Terminal and ran the purge command. After a short wait, total memory usage was back to around 2 GB. Then, right before my eyes, over approximately 30 seconds, the "inactive" memory grew until, once again, I had about 8 MB of RAM free. This fluctuated by a few MB, but nothing significant.
    After rebooting, I opened Activity Monitor again to watch RAM usage. Usage increased to a little more than 2 GB. I then launched the App Store. Before putting my laptop to sleep earlier, I had been downloading a 10 GB update to Borderlands, but had paused the download and quit the application before closing the laptop. I hit resume download and went back to Activity Monitor. Memory usage seemed normal for several seconds, but shortly started increasing rapidly again. I immediately hit "pause download" in the App Store, but RAM usage continued rising, so I quit the application. It kept rising until my full 8 GB was in use.
    At this point I took a screenshot:
    The only thing I have left to tell you is that before upgrading to ML, I had previously attempted to download the same update, but hadn't had time to download the full 10 GB, so had cancelled the update. That was in Lion 10.7.4, with 4 GB of RAM, and I had no issues.

    Inactive memory is frequently used as an I/O cache. While I do not know for sure, I think a lot of the inactive memory may hold data that still needs to be synced back to disk (as in disk writes that are cached and still need to be written). When other processes need RAM, and inactive memory is used to satisfy the need, the app may have to wait for slow disk I/O flushing inactive pages before the inactive memory can be given to the requesting app.
    Again, I do not know for a fact that Mac OS X keeps cached writes in inactive memory for extended periods of time. However, consider the slow performance when RAM is needed and inactive memory is all that is available, that laptops consume a lot of power keeping the rotating disk drive spinning, and that some tasks update logs on a regular basis and would tend to force a laptop to keep writing to disk. It seems to me that maybe Mac OS X addressed this situation by deferring writes to disk and using inactive memory to cache them.
    The purge command seems to force Mac OS X to move pages from inactive memory to the free list, and in doing so it would need to flush cached writes to disk, which is why a purge can take a long time to complete.
    All speculation on my part.
    As for Activity Monitor being your ONLY running process: you need to look at ALL PROCESSES in Activity Monitor, as Mac OS X ALWAYS has dozens of background apps running. If one of them is running away doing lots of I/O, or if you have a backup running (which will also do lots of I/O), that will tend to fill the inactive memory with cached I/O buffers and could be the source of your growing inactive memory.

  • FIFO read that doesn't use 100% CPU

    The FIFO read looks like an event-based node (like a dequeue or Wait on Occurrence), and I think there are a lot of people who assume it's going to use minimal CPU resources while it is waiting for data. I'm wondering if we can have an option that behaves like that. For example, could we have a fixed-size FIFO read where the FPGA could trigger an interrupt to let the RT side know the data is ready?

    Hey igagne,
    I think this is a good idea. I get that timeouts implemented with polling are exactly the right thing for some applications, and quite problematic for others. I think having an option would also make the behavior more obvious to the user, which is a good thing. That being said, we do have a few programming practices that we recommend that can help if the polling behavior is not suitable for your application. Have you been able to find a solution that's suitable for your application? What hardware are you using?
    Along the lines of potential workarounds, I think you'll be interested in this knowledge base article: http://digital.ni.com/public.nsf/allkb/583DDFF1829F51C1862575AA007AC792. A couple of things I'd highlight: first, some hardware does already work in the way you've described. Second, we generally recommend the example code shown at the bottom of the KB. That way, your code has control over the polling rate and the tradeoff between a low-latency response to data being available and consuming resources on the system. If you wanted an interrupt-based solution to reduce the time spent polling, it would be possible to add a Wait on IRQ node and raise the IRQ from the FPGA when the data has been written. I'd still recommend polling for data after receiving the IRQ on the host, because it's not generally guaranteed that the IRQ will arrive after all the data has made its way to the host buffer, but it should be a much shorter poll.
    Thanks,
    Sebastian
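
    The interrupt-plus-short-poll pattern described above might look like this with the nifpga Python bindings (a hedged sketch; the bitfile, resource, FIFO name, and element count are hypothetical). The IRQ does the long wait cheaply, and the brief read timeout afterwards covers the case where the IRQ reaches the host before the last of the data does.

    ```python
    from nifpga import Session

    N = 4096  # elements per transfer (hypothetical)

    with Session("MyFpga.lvbitx", "RIO0") as session:
        fifo = session.fifos["DMA_FIFO"]
        fifo.start()
        # Long, cheap wait: block on the IRQ instead of spinning on the FIFO.
        status = session.wait_on_irqs(0, timeout_ms=10000)
        if not status.timed_out:
            # Short poll: the IRQ may beat the data to the host buffer, so give
            # the read a small timeout of its own rather than demanding 0 ms.
            read = fifo.read(N, timeout_ms=50)
            session.acknowledge_irqs(status.irqs_asserted)
    ```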

  • MSI 6600GT Freezing

    Alright, so I came across this site and decided to check whether there was anyone else having the same problems as me. Much to my surprise, or maybe not, there were a lot of people with this problem or a similar one, which I am about to explain.
    I built this computer a month ago:
    WindowsXP Pro Service pack2
    AMD64 3000+
    k8n-neo4 motherboard
    audigy 2 soundcard
    MSI6600GT pcie
    580 watt psu
    80gb hd
    and 2 drives
    11 fans total..
    OK, so I hook up my computer and install BF1942 to test it out. Right when I start it up, BAM, it freezes. I restart, install UT2K4, and test that out; BAM, it freezes right on the menu screen like BF1942. I go install all the necessary updates for my computer and install the newest patches for both games. Yet, once again, my computer freezes at both menu screens. I install XIII and am able to play it for a few hours, but then it freezes. I install Prince of Persia and am also able to play it for a few hours, but it too began to freeze soon after. I have tried reinstalling Windows and getting different GPU driver versions and even types (like Omega and things like that, not just Nvidia drivers), but to no avail.
    I'm currently running the card under the 78.01 drivers by Nvidia (I know they're new and probably crap, but I just got them because nothing else worked... and of course those didn't work either). The GPU runs at about 50°C at idle, which is pretty hot, and then gets up to 56°C or so in games. Hot, but not enough to freeze... especially at menu screens.
    One interesting thing to note is that any demos I play will work. Right now I play the BF2 demo and don't have any problems whatsoever. I also played a bunch of demos that came with the card, including Black Hawk Down and some other crappy game. The point is, the 3 demos I've played run fine... but all full-version games crash.
    Any help would be much appreciated, and sorry if this is a topic repeated once more upon a stack of the same.
    -Gannon

    Quote from: crull on 06-September-05, 11:38:50
    Those temperatures are well within reason and actually pretty good... so I doubt it's heat-related. Have you checked your IRQ usage?
    One thing about BF1942 also... it will crash if your refresh rate is not 60 Hz. The only way around that is to edit a text file in the game. I'm not sure if you know about that, though.
    Have you tried running any of the 3DMark tests, like 2001, 2003... 2005?
    It sounds like it could be a hardware conflict or an IRQ conflict of some kind, or even chipset drivers... something along those lines. I would suggest trying a full run of 3DMark first to see how the card handles it.
    Did you ever consider that maybe it's not the video card at all? To play those games you need memory and the CPU also...maybe the memory timings are set too tight or the CPU is overheating.
    crull, thanks for getting back to me. 
    First off, I am running my monitor at 60 Hz, so that should be no problem in BF1942. No, I have not run 3DMark on this machine yet... but I will. Also, I have used both RAM and CPU stress testers and passed both... my CPU does not overheat, and neither does my RAM. I'll get back to you when I run those tests...

  • How Reliable is Verizon 3G

    As a recent convert from landlines to all wireless, I find that most of the time the Verizon 3G I have here is outstanding (faster than my former DSL)!! But when it's bad, it's REALLY bad! I have excellent reception on my MiFi (the tower is within sight of my house) and my computer. When the 3G is bad, I have stretches (12-24 hours at a time) where Verizon 3G service drops to BELOW 1 kB/sec. I can't get any work done at that rate! Is it local? (I am on Hawai'i Island - the Big Island.) This has been happening about 2-4x per month without any "service interruption" announcements. Anyone experiencing similar problems with Verizon???

    I live in the hills of Vermont and have had a 3G broadband plan for about 6 months with the 5GB usage plan. I use a cell signal booster that consistently provides a solid two to three bars according to VZaccess Manager. The overall broadband service is sporadic in terms of speed and up time. For example there are frequent periods where the service just disappears (my cell phone will still show 3 to 4 bars while this is happening).
    However this issue is minor in comparison to the obvious speed throttling that takes place over the course of the day. It is almost like clockwork - speeds prior to 6 AM or so are up to ~0.5 Mbps and somewhere between 6 and 7 AM start deteriorating down to my dialup modem speed of 50kbps or slower. Of course there are days when the speeds may be higher - up to 150-200kbps. By Golly there are even a couple of daytime periods of max speed (0.5mbps) per month.
    After 7 or 8 PM the speeds slowly start climbing again so that by about 11 or 12 PM they may reach the 0.5mbps threshold.
    FYI my monthly usage is around 1GB, with one month at a whopping 2GB. There is no way that I have the time to wait for my usage to ever reach 5GB for a month. Looking at biting the bullet and cancelling this plan - does not live up to the hype in terms of speed, quality of service and consistency.

  • How reliable is Barracuda's Email Security Service?

    Hello all, we just purchased a Barracuda Spam and Virus Firewall 400. We have always had Exchange hosted in-house, and, knock on wood, this has been rock solid for us. Part of the sales pitch that caught my eye was the Email Security Service, which is included free of charge if you have one of their appliances/virtual appliances. The ability to store up to 4 days of emails and do pre-filtering is very appealing to me; however, the reliance on a hosted solution, so to speak, has me nervous. Looking at their forums and on other sites, I see complaints of it being down, delayed emails, etc. Some of these complaints are older threads, though. What I am curious to know is what the level of service is now. If you are using this setup or have used it in the past, what was your experience? It's a feature I don't have to enable, change MX records, etc., but if...
    This topic first appeared in the Spiceworks Community


  • Problems Getting CS6 Apps To Launch After Install

    I have recently installed Adobe CS6 Design and Web Premium on my computer and am not having any luck getting any of the applications to run. I have uninstalled and reinstalled twice already, just to see if that would make a difference. When I run Flash Professional, I get an error stating that "The application failed to initialize properly (0xc000001d)". Attempting to launch Photoshop results in "Adobe Photoshop CS6 has encountered a problem". Attempting to launch Dreamweaver doesn't do anything.
    I'm running Windows XP SP3 with 3GB of RAM and plenty of hard disk space to spare. Does anyone have any ideas on additional troubleshooting steps that I can take?

    Thanks for your feedback. I do realize that my computer is underpowered and would not have put down the money for CS6, knowing that. However, I happened to win a copy of it and thought I would try to get what use out of it I could. It could turn out that I will just have to wait to get use out of it until the next time I replace my computer.
    Unfortunately, the details in event viewer are pretty sparse, but here is the example for Flash Professional:
    Event Type: Information
    Event Source: Application Popup
    Event Category: None
    Event ID: 26
    Date: 1/27/2013
    Time: 10:31:27 AM
    User: N/A
    Computer: DESKTOP
    Description: Application popup: Flash.exe - Application Error : The application failed to initialize properly (0xc000001d). Click on OK to terminate the application.
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
