Using both CPU cores on Zynq?

Hi,
Has anyone tried to use both CPU cores of the Zynq?
I did not find any example from Xilinx, only related information
in "Zynq-7000 EPP Technical Reference Manual UG585 (v1.2)".
It says that CPU0 must do these two things:
1. Write the address of the application for CPU1 to 0xFFFFFFF0.
2. Execute the SEV instruction to cause CPU1 to wake up and
jump to the application.
I found a "dual CPU demo" example on the ARM website:
http://www.arm.com/files/pdf/ZC702_DS5_2.pdf
However, the attached source code doesn't do either of the
above two actions. How can the second CPU start then?
Regards,
Pramod Ranade
 

Thank you j, for the suggestions.
I was basically trying with baremetal/baremetal.
First I used "hello world" example for CPU0, it
works fine. Then I added another "hello world",
but now for CPU1. Then I edited its main() and
removed the initialization part and also the
printf("hello world"). Instead, I simply declare
a pointer to some memory location. And in an
infinite loop, I keep incrementing the value
pointed by that pointer.
The "hello world0" on the other hand has been
modified to write start address of the "hello world1"
to location 0xFFFFFFF0 and then a simple "SEV".
Then the program reads the same memory
location (which is being incremented by CPU1)
and prints its value to the UART. I was expecting
to see the value increasing... but it stays
constant.
For the shared memory location, I tried 0x00700000
which is in DDR RAM. I also tried 0xFFFFE000 which
is in OCM. Neither worked...
I suspect CPU0 needs to do something more than just
writing CPU1's start address to 0xFFFFFFF0 and then
issuing a SEV, but I could not find any more information.
Nor can I find any example that uses both CPU cores.
Regards,
Pramod Ranade
 

Similar Messages

  • Does SL really utilize both CPU cores?

I've been wondering about the 32/64-bit and multi-core technologies. We've had 64-bit technology available for over two years, yet only the new OS now exploits it. I ran multiple tasks (video, internet speed tests, etc.) and monitored via Activity Monitor. It didn't seem to me that SL uses both cores independently. Does anyone know if SL uses both CPU cores for multiple independent tasks or processes?

    To All,
    I would like to add an addendum to Edmund's latest post, as quoted below. This will, perhaps, illuminate what will occur now, and the possibilities for the near future...
    2) as pointed out by someone else if app used Grand Central services. Can't imagine many do yet.
    Again keep in mind MULTIPLE apps, even if single threaded, will be assigned to balance out core/CPU loads. And the kernel/OS itself will use the multiple cores for various services used by the applications.
This is all true, and I would like to highlight that no third-party applications yet take advantage of GCD (none that I am aware of).
    However, we can almost certainly assume that many of the underlying components of SL, and probably all of those newly "Cocoa-ized" SL applications (including the Finder) will be taking advantage of Grand Central. The benefits will apply directly to all the "native" processes, and will carry over to third-party processes even if only by virtue of the native ones "getting out of the way."
    If you have 3 single-threaded processes actively consuming CPU cycles, and you have two cores, the ability to "packetize" and dynamically dispatch even one of them allows the other two to be handled more efficiently and smoothly.
    We haven't even started mentioning OpenCL. This is still waiting in the wings, for the most part, but what I have seen so far is... almost shocking. There are a few rudimentary "benchmark" apps floating about that do use OpenCL (all command line), and what they show is that my 8600M GT will complete in less than 2 seconds a "general computing" calculation that it takes my 2.5 GHz "Penryn" CPU 14 seconds to complete (using both cores!).
    So, the question becomes: "How much of Snow Leopard is taking advantage of OpenCL?" I think none of it. Not yet. Whether or not we begin to see the leveraging of OpenCL in updates to SL remains to be seen. Let's not step over the line with speculation, here
    Scott

  • Multisim 11 only uses one CPU core during simulation

    Hi,
Just recently I've been using Multisim again (v11, if that matters), and simulation goes slowly, in my opinion anyway.
    Now there are two things that I've noticed:
- Multisim only uses one of the two CPU cores during simulation, not both;
- I've enabled CPU throttling (AMD's Cool'n'Quiet), but the core speed remains very low even though Multisim needs CPU power while simulating.
    The simulating of a NE555-based LED-flash circuit requires about 28 seconds to get to 100ms in the simulation. Now if I can get the CPU to throttle up, by for example having Prime95 stress one of the two cores, then the simulation only requires 12 seconds to get to 100ms.
    So here are my two questions:
    1. Is Multisim 11 capable of using more than one CPU core while simulating? If yes, how can I get it to use more than one?
    2. Probably not the right forum to ask, but is there a way to force Windows 7 Ultimate x64 to throttle up the CPU when Multisim starts simulating? Since Multisim does not seem to stress the only core it uses enough to throttle it up. Using Prime95 on one core does trigger a throttle up.
    With regards,
    Bart Grefte
    Relevant specs:
    Windows 7 Ultimate x64
    Multisim v11
    AMD A4-3400
    16GB RAM

Completely forgot this topic....
Is there a good reason why there is no multicore support? Multisim could certainly use it for simulating.
As for the CPU throttling, I found a batch file that can increase/decrease the "minimum processor state" setting, and it works immediately, no reboot required.
@echo off
rem Grab the GUID of the active power scheme from powercfg's output.
powercfg.exe -getactivescheme > "%userprofile%\appdata\~apo.tmp"
set /p v_currentscheme= <"%userprofile%\appdata\~apo.tmp"
rem The substring offset (24 here) depends on the Windows display language; see the note below.
set v_currentscheme=%v_currentscheme:~24,36%
del "%userprofile%\appdata\~apo.tmp"
rem Set the AC and DC "minimum processor state" (processor subgroup / setting GUIDs) to %1 percent.
powercfg -setacvalueindex %v_currentscheme% 54533251-82be-4824-96c1-47b60b740d00 893dee8e-2bef-41e0-89c6-b55d0929964c %1
powercfg -setdcvalueindex %v_currentscheme% 54533251-82be-4824-96c1-47b60b740d00 893dee8e-2bef-41e0-89c6-b55d0929964c %1
rem Re-apply the scheme so the new values take effect immediately.
powercfg -s %v_currentscheme%
exit
    Usage is simple:
    Start -> run -> c:\pathtobatch.bat 100
    for "minimum processor state"-setting at 100%
    Start -> run -> c:\pathtobatch.bat 5
    for "minimum processor state"-setting at 5%
    Please note that "set v_currentscheme=%v_currentscheme:~24,36%" is not the same for every language. The original batch had "set v_currentscheme=%v_currentscheme:~19,36%" (most likely for the English version of Windows 7), but since my Windows install is in Dutch, I had to change it in order for it to work.
    Running the batch with @echo off removed will show why.

  • AE CC not using 4 CPU cores

    I have Win8 Pro 64-bit version and AE CC 64-bit, RAM is 16GB. Processor is Intel i5 2500k, 4 cores.
    And in my task manager I see AE is using only 2 cpu cores, why isn't it using all 4 of them for rendering?

    See this to understand how CPUs are used by After Effects:
    https://www.video2brain.com/en/lessons/optimizing-cpus
    You won't always see all of your CPUs used at 100%. There are a lot of other components to rendering, and the CPU is often not the bottleneck.
    See this page for resources about making After Effects work faster: http://adobe.ly/eV2zE7
    See this page for information about hardware for After Effects: http://adobe.ly/pRYOuk

  • PPro not using all cpu cores evenly

    Hey Guys!
    I've been recently trying to optimize my laptop to get better performance.
    I have a Lenovo Y550P -
    i7 Q 720 1.60 GHz
    8GB Ram
    Nvidia GT240m video card (1gb ram)
    64 bit windows 7
    500 gb 7200 RPM HDD
    I know this is not the ideal setup, but it is what I have.
    I have a massive amount of GoPro footage to process, and was having issues shuttling at 1/4 resolution.  I made some tweaks such as:
    removed windows visual effects
    enabled hard drive write buffering
    disable CPU core parking (laptop thing)
    various tweaks to the services.msc file
    "enabled" my video card for cuda =)
This has greatly improved my shuttling at 2x and 4x. However, when I watch the performance monitor I notice it uses CPU cores 4 & 7 significantly more than the other cores. CPU usage only reaches about 40% during shuttling, yet it stutters.
    Here is a screen capture of resource monitor on disks:  I'm currently reading the footage from an external USB 3.0 drive (F), scratch disks are set to internal (C).
    Here is a screen capture of CPU resources, notice how cpu core 4 (start from 0) has higher usage.
    Is there something I can improve on my setup to make my system more efficient, or is all this just because of the HDD bottleneck?  Any advice on improving my limited systems performance would be appreciated.
    Thank you for your time!

    Sorry, here are the images!

  • Cc2014 not using all CPU cores or GPU

I'm using the new CC2014, and I exported a project straight out of Premiere and it utilized all 8 of my CPU cores and my GPU. When I export using Adobe Media Encoder, it only uses one core and doesn't seem to use the GPU at all. I tried changing the setting from GPU to software only, and it didn't seem to make a difference.

Just found out that dragging and dropping PPro sequences into AME seems to be a work-around, plus unchecking "Import Sequences Natively" in AME.
Well, indeed it worked. Not very convenient though, and it's a bug that's been around for a while (7 months!! according to some users).
Please fix this; it cannot be too hard, can it?
What bothers me even more is the still low overall CPU usage of around 35% when exporting....
    My system:
    i7-5930K @ 4.40GHz (overclocked)
    Gigabyte GTX 770 GB
    SSD cache drive
    HDD-RAID0 assets drive
    Windows 8.1

  • Media Encoder CS6 Not using all CPU Cores or RAM

    I'm currently exporting a 90 second video (using the Vimeo HD 1080p preset), and it is taking about 30 minutes. This is extremely strange as I have an i7-990x OC'ed @4.5, 12GB RAM, SSD, etc.
    I looked at Performance tab in Windows Task Manager, and saw 4 threads were not being used at all, only 20-25% of the CPU was in use, and only 6GB out of the 9GB I specified was being used.
    Why isn't Media Encoder making full use of my CPU and RAM?

I'm having the same problem, but even worse.
For the test, I take a 10 min .mov or .mp4 file, simply modify the "scale", and hit export.
A 10 min file is taking 4 hours to render.
    This is on a:
    2012 MacBookPro i7 2.3Ghz, 16 GB 1600 DDR3, SSD disk. - Mac OSX 10.7.5
    and a:
    2011 iMac i7 3.7Ghz, 16 GB 1333 Mhz, 7200 RPM HD - Mac OSX 10.7.5
    The CPU stays at 90% idle the whole time.
    I know this is an old thread, but maybe someone could give me a hand.

  • QT Pro nor iSquint will export using both CPUs - MacBook Pro

    Hello!
    I've got a few videos (of various codecs) that I want to export for my iPod, at the high-res 640x480 level. Because I want more control over export settings than iTunes gives, I usually export with Quicktime Pro, or sometimes iSquint, depending on the quality I want and the codecs used.
Exporting takes a lot of time, as it does on any computer. Unfortunately, both Quicktime and iSquint seem to refuse to export using both CPU cores in my 2.2 GHz MacBook Pro. They each suck down the entire power of one core, but the other sits idle. This greatly increases the export time, which is annoying when I've got a lot of videos to convert.
    Oddly enough, when I tried exporting said source movies to a DVD via iDVD, it used both cores. In other words, QT is actually capable of using 2 cores to export videos in iDVD, but not via QT Pro itself - very very odd.
    Is this a "feature" of the program - that is, QT Pro does not support dual-core encoding except through other programs - or some bug? Since iSquint does the same thing, I'm fairly sure the problem does not actually lie with Quicktime as iSquint uses the FFmpeg codecs to convert videos.
    Any ideas what the problem is, and how to work around it? If it's a bug in the software, I can reinstall said application.
    -Dan

The "grey curtain" indicates a kernel panic. KPs are usually the result of hardware-based problems or unrecoverable system-wide software issues. I say system-wide because OS X is designed to contain a balky program and will allow you to kill the program without affecting the OS.
    Unplug all external devices except the keyboard and mouse and try it.
    Does it work reliably?
    • If so, start connecting your externals one by one until you find the one that causes issues.
    • if not, you can try a full erase and reinstall of the software and OS. This may help.
    If not, it is likely that you have a hardware issue that will need to be addressed by a certified repair technician.
    good luck.
    x

  • Webform loading too long : only 1 cpu core used out of 16 core available?

    Hi,
We have one webform which is quite heavy and takes 1 min 30 s to load:
1 minute for the Essbase calculation (spreadsheet extractor)
30 seconds for network & browser rendering
We have seen that Essbase is only using 1 CPU core (100% usage) out of the 16 cores available.
Is it possible to give some of the calculations to other CPU cores? I know it's possible for HBR rules with CALCPARALLEL, but for a webform?
    Thanks!
Edited by: Tipiak on Jul 3, 2012 23:06

    The system has no ability to use multiple threads for one web form. It sounds as if your form is not designed for optimal performance which usually means you have a large number of total cells and/or are not using dense dimensions for your rows and columns (usually accounts/periods).
    Suggest you refer to the KB article which discusses Hyperion Planning Form design at:
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=779502.1
    Regards,
John A. Booth
    http://www.metavero.com

  • Running both ARM core and Microblaze Together

    Hi,
I want to use both the ARM core processor_0 and a Microblaze core. At the moment I am running a timer interrupt on the Zynq ARM, and the Microblaze is printing something using the JTAG UART.
When I first run the Microblaze and then the Zynq ARM core, everything works fine.
But when I run the ARM core first and then the Microblaze, the ARM interrupts stop. It seems like the ARM core gets stuck.
Can anyone help me with this?
I am using Vivado and SDK 2014.2
    Thank you

    Hello,
    You have a few options with regards to passing arguments between VIs running on your host and Luminary Micro board.  You would be able to use serial communication or ethernet to transfer data between your board and your PC.  You could alternatively use digital pins themselves, but this is more intensive as a custom protocol would have to be written.
    As for getting the VIs to start at the same time, here are some options that you can look at ( you are not limited to these):
     - get your ARM target to wait for a tcp packet from the host machine before it starts executing its main code.  That way manually starting your host VI will activate the ARM code
     -  Try using VI Server functions on the ARM target while you are developing your code, perhaps it will send the necessary commands to the host PC to open a particular VI (it doesn't stop you from placing this function to your target)
    Let us know if this helps!
    Kameralina
    Ask NI (ni.com/ask)
    Search The KnowledgeBase
    NI Developer Zone
    Measure It. Fix It. ni.com/greenengineering/
    NI Vision ni.com/vision/

  • Mathematica 5.2.2 uses only one core

Using Activity Monitor, I have seen that Mathematica 5.2.2 uses only the second core, even though Wolfram says that it supports multi-core calculations.
So, firstly, I use only half of the power of my MBP C2D, and secondly this could be dangerous for the processor, because I use Mathematica all day for work and this asymmetric use of the processor could damage it.
Does anybody know how to use both cores?

    This is a Mathematica problem. Contact their technical support. The Apple Discussions are limited to assistance with Apple products. We can't fix a problem with Mathematica software.
    Furthermore, there's no reason why the use of one core will damage your processor.

  • Windows XP Mode (Virtual PC) only uses half a CPU core/thread.

Not only does Windows XP Mode not support multiple cores, to me it looks like it only uses half a core/thread.
I noticed this because I just set up Windows XP Mode (Virtual PC) on my computer. I updated it, joined it to our domain, and then installed our legacy ERP software. This ERP software was designed and programmed to run on Windows 98 or 2000, I can't remember right now, but its hardware requirements are very low. This software runs very slowly in the virtual PC, with the processor being the problem.
Matching the processor spike on the virtual PC using both Performance Monitor and Task Manager, I compared it to Task Manager running on the host (my Windows 7 machine). The spikes in CPU matched one of the threads on my i7 processor (I say threads because an i7 has 8 threads but only 4 cores). To my surprise, when the CPU reached near 100% on the virtual PC, the same spike on the host was only about 50% of the CPU thread.
The host has an i7 950, 3.07GHz processor, so this means the virtual PC can only use up to about 1.5GHz. I have not run into the problem reported in other posts on this forum where the CPU is always at 50% or 100% on the virtual PC, because when there is no activity the CPU drops to 10% or less and stays there.
I will now have to use vmplayer; the problem is having enough XP licenses for all users.
Was anyone else aware of this?

You can't use half a thread. Windows VPC uses one core at 3.07GHz; there's no halving the speed of the CPU.
The only way WVPC would use a lower clock speed is if power management is throttling the CPU speed down. Do you have a power management setting throttling your CPU? Are you sure the problem is the CPU and not disk or network related?
Task Manager on the host is not a very accurate way to determine CPU usage of the VM.
If you have the correct version of Windows 7 (Pro or higher) there are no issues with XP Mode licenses; you get one license included with Windows 7. There's no restriction requiring you to use XP Mode with Windows VPC; it will work under 3rd-party virtualization solutions.

  • Command is used to get the usage of each CPU core in multi core cpu

    Hi,
Using the sar command, how can I get the usage of each CPU core?
    -Thanks
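With the sysstat package installed, `sar -P ALL` prints one row per core (cpu0, cpu1, ...) per sample; `mpstat -P ALL` gives the same view. The raw counters those tools read live in /proc/stat, one "cpuN" line per core. A minimal sketch (assumes Linux; sar's exact columns vary by sysstat version):

```shell
# Per-core utilization via sysstat: one row per core, 1-second interval, 3 samples.
#   sar -P ALL 1 3
# The raw per-core jiffy counters behind sar live in /proc/stat,
# one "cpuN" line per core:
grep '^cpu[0-9]' /proc/stat
```

Each `cpuN` line lists time spent in user, nice, system, idle, iowait, etc.; sar just samples these counters and turns the deltas into percentages.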

The best way to do this is to put the monitor name in a property bag in the script and pass that to your event details. Otherwise, we're looking at querying the database each time the monitor generates an event, and this is overhead that is really not necessary. The other option, which is even worse in terms of performance, is to use PowerShell to query the SDK for the monitor name. Neither of these options is going to be a good solution, because now you need to implement action accounts that can either query the database or the SDK.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Logic 8 seems not be using both cores??

I've been getting a lot of System Overload messages, so I started monitoring the CPU usage and I've noticed that Logic is only using one core. When that maxes out I get the message. I've done all the things I should be doing (freezing tracks, raising the I/O buffer size, etc...) but I don't seem to be getting both cores... any thoughts?
    Here's my system:
    Dual 2GHz G5 w/6GB RAM, 10.5.8.

    Hi,
Logic distributes different "tasks" to both CPUs. Usually, one channel strip goes to one CPU; Logic can't split a single channel strip across different CPUs.
Solution:
If you have a channel strip overloaded with plugins, set the channel strip's output to a bus. Use the resulting AUX channel strip to distribute the plugin chain over both the main and the AUX channel strip.
Signal flow: main channel strip -> AUX channel strip -> Out 1-2
Logic can and usually will distribute the CPU load across both strips.
Another thing: if you are not using a software instrument, switch to an empty audio track in the Arrange window while playing back your project. Active (armed) software instrument tracks use more CPU resources because Logic waits for your live MIDI input.
    Fox

• I have a 12-core with Quadro 4000 and 5770 and want a dual NEC monitor setup: how do I connect?

I just bought a 12-core with Quadro 4000 and 5770. I want to use a dual monitor setup; the monitors are NEC with Spectraview-II. How do I connect? The 4000 only has 1 DisplayPort and 1 DVI; the 5770 has 2 of each. If I use both 5770 DisplayPorts, does the 4000 contribute any work at all? I read that on a PC they would work together, but on a Mac they do not.
I read that DisplayPort has higher bandwidth than DVI, and NEC recommends using DisplayPort with these monitors for best performance.
When I was setting this up I looked at an Nvidia Quadro 4000; unfortunately it was the PC version, which has 2 DisplayPorts, while the Mac version reduces it to one. I did not think there could be a difference.
I mainly want to use it for CS6 and LR4.
How to proceed???
I do not want to use the Quadro 4000 for both monitors; that would not optimize both monitors (one DP and 1 DVI). Using just the 5770 would work, but then I do not think the 4000 would be doing anything, and the 5770 has been replaced by the 5870, which has more bandwidth.
Any ideas? I am a Mac newbie; I have never tried a Mac Pro, just bought this off eBay, and now I have these problems.
As a last resort I could sell both and get a 5870. That would work, I'm sure of that; it's just that I wanted the better graphics card.
    Thanks,
    Bill

    The Hatter,
I am a novice at Mac, so I read all I can. From what I understand, the NEC monitors I bought require DisplayPort for their maximum performance. The GTX 680 only has DVI outputs; the difference, as I understand it, is the larger bandwidth of DP.
You said I have the 4000 for CUDA. I am not all that familiar with CUDA, and when I do read about it I do not understand it.
A concern I have is that if I connect the 2 high-end NEC monitors via the 5770, using its 2 DisplayPorts, I would have nothing connected to the 4000. Is the 4000 doing anything with nothing connected? I read that in a PC system the 2 cards would interact, but in a Mac system they do not.
Bottom line, as I see it: the 4000 will not be useful at all to me, since I want a dual monitor setup.
So far the 5870 seems the best choice: higher bandwidth than the 5770, and it has 2 DisplayPorts to optimize the NEC monitors.
I'm not sure how fine I am splitting hairs, nor do I know how important those hairs are. I am just trying to set up a really fast, reliable system that will mainly be used for CS6 and LR4. Those NEC monitors are supposed to be top notch.
