Remote configuration question

Hello! I have several questions about the configuration of my system, which is shown in the picture.
I need the ability to program the second FPGA (a Virtex-6) and its flash memory. I am going to send an MCS or BIT file over Ethernet to FPGA 1 (a Kintex-7), which will then program FPGA 2, or its flash, in JTAG mode. First of all, I need to develop the JTAG configuration logic on FPGA 1. I then have a few questions:
1) Is it possible to detect the flash memory of FPGA 2 in the JTAG chain and program it via FPGA 1? Or is it only possible to program it using iMPACT and a JTAG programmer?
2) If my JTAG configuration logic (on FPGA 1) has mistakes, is it possible to damage FPGA 2 by sending wrong bit sequences while configuring it on the fly?
 

XSVF is essentially a recording of a straightforward iMPACT programming session: the transitions of the JTAG signals are captured, and you then "play back" the recording to reproduce the same operations inside your target system.  Pretty much anything you do in iMPACT, including indirect flash programming (SPI or BPI), can be converted into XSVF.
You could also roll your own JTAG conversion code, but I think that would take a lot more effort.  I would not be too worried about damaging the FPGA, however.  Errors in the configuration process are typically detected as CRC errors and prevent the part from running a bad bitstream.
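As an illustration of what "playing back" such a recording involves, here is a minimal sketch of an XSVF-style interpreter loop in Python. It is a decoding skeleton only, covering a small opcode subset in the spirit of Xilinx's XSVF application notes; the points where a real player on FPGA-1 would actually wiggle TCK/TMS/TDI are only marked with comments.

```python
# Opcode values follow the XSVF format described in Xilinx app notes.
XCOMPLETE, XTDOMASK, XSIR, XSDR, XRUNTEST, XSDRSIZE = 0x00, 0x01, 0x02, 0x03, 0x04, 0x08

def play_xsvf(data):
    """Decode a subset of XSVF records; returns (name, operand) pairs."""
    ops = []
    sdr_bits = 0                                   # current DR shift length, set by XSDRSIZE
    i = 0
    while i < len(data):
        op = data[i]; i += 1
        if op == XCOMPLETE:
            ops.append(("XCOMPLETE", None))
            break
        elif op == XSDRSIZE:                       # 4-byte big-endian bit count
            sdr_bits = int.from_bytes(data[i:i+4], "big"); i += 4
            ops.append(("XSDRSIZE", sdr_bits))
        elif op in (XTDOMASK, XSDR):               # operand length comes from XSDRSIZE
            n = (sdr_bits + 7) // 8
            ops.append(("XTDOMASK" if op == XTDOMASK else "XSDR", data[i:i+n]))
            i += n
            # a real player would shift these bits into the DR here
        elif op == XSIR:                           # 1-byte bit count, then the IR bits
            nbits = data[i]; i += 1
            n = (nbits + 7) // 8
            ops.append(("XSIR", data[i:i+n])); i += n
            # a real player would shift these bits into the IR here
        elif op == XRUNTEST:                       # 4-byte wait time
            ops.append(("XRUNTEST", int.from_bytes(data[i:i+4], "big"))); i += 4
        else:
            raise ValueError("unhandled XSVF opcode 0x%02x" % op)
    return ops
```

On the Kintex-7 this logic would typically live in fabric or in a soft CPU, but the control flow is the same: read an opcode, read its operands, shift the bits, and check TDO against the expected masked values.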

Similar Messages

  • SAP-JEE, SAP_BUILDT, and SAP_JTECHS and Dev Configuration questions

    Hi experts,
    I am configuring NWDI for our environment and have a few questions that I'm trying to get my arms around.
    I've read that we need to check in SAP-JEE, SAP_BUILDT, and SAP_JTECHS as required components, but I'm confused about the whole check-in vs. import distinction.
    I placed the 3 files in the correct OS directory and checked them in via the check-in tab in CMS. Next, the files show up in the import queue on the DEV tab. My question is: what do I do next?
    1. Do I import them into DEV? If so, what is this actually doing? Is it importing into the actual runtime system (i.e., the DEV checkbox and parameters as defined in the landscape configurator for this track)? Or is it just importing the file into the DEV buildspace of the NWDI system?
    2. The same question goes for the Consolidation tab. Do I import them there as well?
    3. Do I need to import them into the QA and Prod systems too? Or do I remove them from the queue?
    *** Development Configuration questions ***
    4. When I download the development configuration, I can select the DEV or CONS workspace. What is the difference? Does DEV point to the sandbox (or central development) runtime system and CONS to the consolidation runtime system as defined in the landscape configurator? Or are these the DEV and CONS workspaces/buildspaces of the NWDI system?
    5. Does the selection here dictate the starting point for development? What is an example scenario where I would choose DEV vs. CONS?
    6. I have heard about the concept of a maintenance track and a development track. What is the difference, and how do they differ from a setup perspective? When would a developer pick one over the other?
    Thanks for any advice
    -Dave

    Hi David,
    "Check-In" makes SCA known to CMS, "import" will import the content of the SCAs into CBS/DTR.
    1. Yes. For these three SCAs specifically (they only contain buildarchives, no sources, no deployarchives) the build archives are imported into the dev buildspace on CBS. If the SCAs contain deployarchives and you have a runtime system configured for the dev system then those deployarchives should get deployed onto the runtime system.
    2. Have you seen /people/marion.schlotte/blog/2006/03/30/best-practices-for-nwdi-track-design-for-ongoing-development ? Sooner or later you will want to.
    3. Should be answered indirectly.
    4. Dev/Cons correspond to the Dev/Consolidation system in CMS. For each developed SC you have 2 systems with 2 workspaces in DTR for each (inactive/active)
    5. You should use dev. I would only use cons for corrections if they can't be done in dev and transported. Note that you will get conflicts in DTR if you do parallel changes in dev and cons.
    6. See link in No.2 ?
    Regards,
    Marc

  • Configuration question on css11506

    Hi
    One of our VIPs, with 4 local servers, currently has HTTPS; HTTP is redirected to HTTPS.
    Now my client has a problem: a series of directories needs to use HTTP, not HTTPS. My questions:
         1. Is it possible to configure the VIP to filter out those special directories and let them use HTTP, while the rest of the pages and directories are redirected to HTTPS?
         2. If not, can I create another VIP using the same local servers, but limited to only those special directories, with wildcards? The directories are partially wildcarded, something like http://web.domain/casedir*/casenumber.
         3. If neither option works, is there any other way I can fix this problem?
    Any comments will be appreciated
    Thanks in advance
    Julie
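    As a sketch of the matching logic behind question 2 (this is not CSS 11506 syntax, and the pattern list is a made-up example), shell-style wildcards of the kind in the question can be tested like this:

    ```python
    from fnmatch import fnmatch

    # Hypothetical list of path patterns that should stay on plain HTTP.
    HTTP_ONLY_PATTERNS = ["/casedir*/*"]

    def needs_https_redirect(path):
        """True if an http:// request for `path` should be redirected to https."""
        return not any(fnmatch(path, pat) for pat in HTTP_ONLY_PATTERNS)
    ```

    Whether the CSS can express this in a content rule is the real question for Cisco; the sketch only shows the intended per-request decision.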


  • Configuration Question on  local-scheme and high-units

    I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I'd appreciate it if you could answer them ASAP.
    - My requirement is that I need only 10000 objects to be in the cluster so that the resources can be freed up when other caches are loaded. I configured <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
    - Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to give me the correct stats. Is there any other utility I can use?
    I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
    <distributed-scheme>
    <scheme-name>TestScheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
    <local-scheme>
    <high-units>10000</high-units>
    <eviction-policy>LRU</eviction-policy>
    <expiry-delay>1d</expiry-delay>
    <flush-delay>1h</flush-delay>
    </local-scheme>
    </backing-map-scheme>
    </distributed-scheme>
    Thanks
    Ravi

    I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. Appreciate if you can answer them ASAP.
    - My requirement is that I need only 10000 objects to be in the cluster so that the resources can be freed up when other caches are loaded. I configured <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
    It is per backing map, which is practically per node in the case of distributed caches.
    - Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to give me the correct stats. Is there any other utility I can use?
    Yes, you can get this and quite a bit of other information via JMX. Please check this wiki page for more information.
    I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
    <distributed-scheme>
    <scheme-name>TestScheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
    <local-scheme>
    <high-units>10000</high-units>
    <eviction-policy>LRU</eviction-policy>
    <expiry-delay>1d</expiry-delay>
    <flush-delay>1h</flush-delay>
    </local-scheme>
    </backing-map-scheme>
    </distributed-scheme>
    Thanks
    Ravi
    Best regards,
    Robert
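    Since <high-units> applies per backing map, and so effectively per storage node, a cluster-wide cap has to be divided by the node count. A back-of-the-envelope sketch (the function name is illustrative, not a Coherence API):

    ```python
    def per_node_high_units(cluster_cap, storage_nodes):
        """Per-node <high-units> so the cluster holds roughly cluster_cap entries."""
        return cluster_cap // storage_nodes

    # For the 12-node cluster in the question, a 10000-entry cluster-wide
    # target means setting <high-units> to about 833 on each node.
    print(per_node_high_units(10000, 12))  # 833
    ```

    Treat this as a rough sizing aid only; exact behavior depends on the Coherence version and on how backup copies are stored.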

  • Problem creating project from a remote configuration

    Hello,
    I have developed a DC with a remote configuration. After releasing my development activities and transporting them into the test and consolidation systems, I would like to continue developing. If I start the NWDS again and log on to the DTR, a message is displayed saying that there is a new version of my DC and that I should remove and re-import the DC.
    After doing so, I get the new remote DC, but my project is gone. So I tried to create a new project from this DC. During the project creation process I see many errors in the task bar: "failed to resolve reference" and "could not load used dc". More importantly, I get a popup saying that the file .dcdef is read-only and mandatory changes cannot be made.
    Can anyone help me with this?

    Hi Pran, Ramakrishna,
    Sorry for answering so late, and thank you for your help. I did as you told me, but I can't see anything wrong in the configuration, nor can I get any further.
    Pran:
    I have done the DTR action on the .dcdef file as you told me. Afterwards I did not get any read-only exception, but I cannot rebuild the newly created project because of missing DCs.
    "Remove and import procedure":
    After starting the developer studio and logging into the JDI, there was a message that a new configuration version was active on the JDI and that I should remove my local configuration and import the new remote configuration again. I think this is a normal procedure after transporting your DC.
    For now I have stopped developing with the JDI. Maybe I'll give it another try sometime. How is your experience with the JDI?
    Thank you for your help, again!
    Best regards,
    Christian

  • Airport extreme remote configuration

    Can anybody help in enabling airport remote configuration?

    Yes, this is possible.
    The setting on your Airport Extreme you want is called "Allow setup over WAN" (on the "BaseStation" tab). Check this box and click "Update". You may get a security warning when you try and save this setting - read it and then decide whether you're happy ignoring the warning.
    Now, when you're outside the network, launch Airport Utility and choose File->"Configure Other...". Enter the external IP address of your Airport Extreme and the password... and you're in.

  • Remote configuration

    I'd like to set up a wireless system for my residential tenants in another state. After I go on site and install the hardware, can I administer the AirPort Extreme Base Station over the internet from out of state?
    Power Mac G4   Mac OS X (10.4.8)   clients will be Macs and PCs

    Jose,
    To answer your second question: yes, you can configure WebLogic Workshop to connect to a remote WebLogic Server instance. I have added some information regarding this to this post. Regarding the first question, whether you can install WebLogic Workshop without WebLogic Server: currently this cannot be done. The installers are not designed to allow it.
    Listed below is a solution (S-20563) I've created which is available on the AskBEA site:
    http://support.bea.com/application?namespace=askbea&origin=ask_bea.jsp&event=button.search_ask_bea&askbea_display=relevancy&askbea_max_number_returned=50&question=How+do+I+connect+to+a+remote+WebLogic+Server+from+Workshop&all_bea_products=all_bea_products&ES=ES
    Hope this helps.
    Thanks
    Raj Alagumalai
    WebLogic Workshop Support
    "Jose" <[email protected]> wrote in message
    news:3f5cdc0e$[email protected]..
    >
    I would like to know if I can install WebLogic Workshop 8.1 without installing WebLogic Server 8.1.
    I mean, can I configure WebLogic Workshop to use an instance of WebLogic Server that is running on a remote machine?
    Thanks in advance.

  • SQL server remote configuration

    Hi,
    We are trying to install just BIDS on a user machine to be able to develop reports remotely. I have a couple of questions.
    1. In my current scenario, SQL Server 2008 R2 is already set up with Reporting Services on, say, Machine1. Can it be reconfigured so that a user on Machine2, trying to develop a report, accesses the report server DB on Machine1?
    2. Can this be done in such a way that we first point BIDS at a sample database (like AdventureWorks) on Machine1 before trying to develop reports against the actual database?
    Could anyone help with the detailed steps involved?
    I have done report development with a SQL Server installation and configuration on the same machine before, but not remotely.
    I looked for manuals, and there is help available, but for this scenario I want to confirm the steps involved. Any input in this regard would be extremely helpful.
    Thanks,
    SK

    Hi valueinfo,
    As per my understanding, you have a machine named Machine1 with Reporting Services in SQL Server 2008 R2 installed. When users create reports using BIDS, you want to allow them to use the database installed on Machine1 remotely. If that is the case, we need to allow remote connections to this server and then add a new login to the database. For the details, please refer to the following steps:
    In SQL Server Management Studio, right-click the instance and open the properties dialog box.
    Click Connections; in the Remote server connections section, check the Allow remote connections to this server check box.
    To configure database access, expand the Security folder, right-click Logins, and then click New Login.
    In the Login – New dialog box, specify SQL Server Authentication mode.
    Type a login name and password, and then confirm the password.
    In the left pane, click Database Access.
    In the right pane, select the Permit check box for the databases you are granting access to, and then click OK.
    If we want to use Windows Authentication, we need to be members of the local Administrators or Users group.
    Reference:
    How to enable remote connections in SQL Server
    How to add a SQL Server Login
    If there is any misunderstanding , please feel free to let me know.
    Best Regards,
    Wendy Fu

  • AD Connector Remote Manager Question

    all,
    trying to install MSFT AD BASE 91170 connector on OIM/OAM 11.1.1.3 environment. Finished the following steps thus far:
    1. created OIM/OAM/AD server environments
    2. Created OIMGroup and admin user account association
    3. Imported the connector
    4. Update ADITResource
    5. Copied ldapbp.jar and ran uploadjars.sh script
    6. Updated search base in Group Lookup Recon and Organization Lookup Recon jobs
    7. I was able to provision a user
    I have two questions:
    1. Section 2.2.2.1 (on page 2-14) of the connector guide indicates that I need to run the installation of the Remote Manager on the AD server. Are this step and the subsequent steps required? What else do I need to run as part of the installation? If the rest of the steps are optional, in what cases do they need to be performed?
    2. My Design Console Lookup.ADReconciliation.GroupLookup does not have any values; it appears recon did not work in this case. What could I be doing wrong? I can add configuration details if needed. I have done this before, but I'm not sure what I missed this time.
    Thanks in advance,
    Prasad.
    Edited by: Prasad on Oct 25, 2011 11:48 AM

    Sagar,
    I ran the group lookup recon task several times yesterday. OIM did not populate the lookup. Today I changed the recon type from Refresh to Update, changed it back to Refresh, and it worked, with a few exceptions like the one below.
    Overall the records are there now, but it is unclear why the original task executions did not pull anything. I did not see any other exception yesterday either.
    <Insert failed.><Oct 26, 2011 10:56:27 AM EDT> <Error> <OIMCP.ADCS> <BEA-000000> <Description : Insert failed.>
    <Oct 26, 2011 10:56:27 AM EDT> <Error> <OIMCP.ADCS> <BEA-000000> <Thor.API.Exceptions.tcAPIException: Insert failed.
    at com.thortech.xl.ejb.beansimpl.tcLookupOperationsBean.addLookupValue(tcLookupOperationsBean.java:1357)
    at Thor.API.Operations.tcLookupOperationsIntfEJB.addLookupValuex(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor1896.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethod

  • ASA 5505 VPN configuration question

    I have an ASA 5505, v7.2(3), ASDM 5.2(3), that I am trying to get reconfigured after our cable company was bought out and the new owner replaced the cable modem with a router. My ASA now has a non-routable "10" address on the outside instead of one of the 5 statics assigned to me. I have NATted my servers, but I cannot get my VPN clients connected. I am not sure how to get one of my statics assigned to the ASA to use for the VPN tunnel. It used to be that I just tunneled to the static "outside" address with my Cisco VPN clients (remote PCs). I tried assigning one of my statics to the outside, but then I had no connectivity at all, since there is now a router in front of me where before it was just a modem. I am used to working on larger PIXes with my own IP address range, and not used to dealing with DHCP-assigned outside addresses, so I am sure it is something simple I am missing. Any help would be greatly appreciated; this is for a small charity animal shelter that has been down since the cable company made their "transparent change" when they bought the other one out.
    The ISP router has one of my statics on its outside-facing interface and a 10 address on the interface directly connected to my ASA. The ISP router then assigns a 10 address to the outside interface of my ASA. I then have 192 addresses on my inside interfaces, with statics for the servers. I am just not sure how to connect my VPN clients now that I no longer have a routable outside address. I have tried connecting to the static on the ISP side, thinking they might pass the packet, but they don't. I thought maybe a loopback could be assigned to the ASA, but could not see a way to do that. Also, the Ethernet interfaces cannot have addresses assigned, only VLANs, of which there can only be two, and both are used (inside, outside), so I am out of ideas.
    Thanks for any help
    Thanks much

    Hi Kevin,
    Your current design causes administrative overhead. You either need a one-to-one mapping on the outside interface, or a PAT that forwards UDP 4500 and TCP 10000 (which may cause trouble with GRE).
    Ask your ISP to configure the router in bridged mode and let your outside interface have the public IPs instead of 10.x.x.x.
    Regards

  • ASA VPN configuration question

    I am trying to configure a VPN tunnel from an ASA to a remote third-party site. I have set up a new tunnel group, but the ASA seems to be trying to use the DefaultRAGroup and then the DefaultL2LGroup. What do I need to do to ensure it uses the new one I have set up?

    The name of the tunnel-group has to be the ip address of the remote gateway. With that, the ASA can match the IPsec packets to the correct tunnel-group.

  • Closed loop configuration question

    I have a motor (with encoder feedback) attached to a linear actuator (with end limit switches). The motor has a commercially bought servo drive for control. The servo drive will accept either step/direction (2 separate TTL digital pulse train inputs) or an analog -10 to +10 VDC input for control.
    The purpose is to drive the linear actuator (continuously in and out) in closed-loop operation, using a setpoint variable (SV) value from a file, converted to a frequency, to compare against an actual measured position variable (PV) frequency.
    I have created and experimented with individual VIs that allow analog control and digital pulse train control (thankfully with the help of examples).
    Before I pose my question, I would like to make the following observations: it is my understanding that closed-loop control means that I don't need to know an exact position to drive to, but rather constantly compare PV and SV through a PID application.
    Without getting into any proprietary information, I can say that the constant positioning of the linear actuator will produce a latency of 2 to 3 seconds between the time the actuator moves to a new position and when the PV changes. While experimenting with the analog input, I noticed an immediate response in motor velocity, but after the motor is stopped, position is not held in place. However, while experimenting with the digital pulse train input, I noticed that the servo drive can only accept one command at a time; if, halfway through a move, position error produces a command to move the linear actuator in the opposite or a different direction, the original move must finish first.
    Can anyone recommend the proper configuration for the closed-loop control I have described?
    If I can make the system work with the servo drive/motor, I plan to use the simple DAQ card (PCI-6014) with the analog output, or to utilize the digital output. If I can't get this to work, we do have a PXI with a 7344 motion card (but I would like to exhaust all efforts to use the PCI-6014 card first).
    Depending on where I go from here, I planned to use the PID VIs for the loop control.
    Thanks,
    Wayne Hilburn

    Thanks for the reply, Jochen. I realize there is a built-in latency with Windows, but I think the I/O control would be OK. A change in actuator position will not result in an immediate change in the process variable. Is there a way to measure the latency, or is it calculated? A satisfactory reaction time would be 1 to 1.5 s.
    The PCI-6014 is used to supply the control output to the servo drive/amp, not to drive the motor itself. As stated earlier, while using the 6014 board I have the choice of digital or analog output.
    Currently I am at the point where I must choose which configuration: analog control or digital control (in the form of a digital pulse train). (Inserting from my first message:)
    While experimenting with the analog input, I noticed an immediate response in motor velocity, but after the motor is stopped, position is not held in place. However, while experimenting with the digital pulse train input, I noticed that the servo drive can only accept one command at a time; if, halfway through a move, position error produces a command to move the linear actuator in the opposite or a different direction, the original move must finish first.
    I don't claim to understand all the limitations of the specific boards; however, I am using an approach that is showing me the characteristics (a couple are listed in the paragraph above) of the hardware and software configurations.
    So I am really back to my original question: which configuration would be better for closed-loop control, analog or digital pulse train?
    Thanks,
    Wayne Hilburn
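    The SV-vs-PV comparison through PID described above can be sketched as a discrete update loop. The gains, time step, and the way the command would reach the analog channel are placeholders, not the NI-DAQ or LabVIEW PID API:

    ```python
    def make_pid(kp, ki, kd, dt):
        """Return a step(sv, pv) function implementing one discrete PID update."""
        state = {"integral": 0.0, "prev_err": 0.0}
        def step(sv, pv):
            err = sv - pv
            state["integral"] += err * dt
            deriv = (err - state["prev_err"]) / dt
            state["prev_err"] = err
            return kp * err + ki * state["integral"] + kd * deriv
        return step

    # Illustrative use: each control period, the returned command would be
    # scaled into the -10..+10 V analog output (or a pulse-train rate).
    pid = make_pid(kp=1.0, ki=0.1, kd=0.0, dt=0.1)
    command = pid(sv=100.0, pv=90.0)   # positive error -> positive drive command
    ```

    Given the 2 to 3 second process latency mentioned above, the analog (velocity) input looks like the more natural fit for this kind of continuous correction, since the pulse-train mode, as observed, cannot preempt a move already in progress.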

  • Multiple Oracle Configuration Question

    We have a typical environment setup. I will explain it below:
    Our application works in online and offline mode. In online mode we connect to an Oracle 10g Enterprise Edition server and a local instance of Access; in offline mode the application works entirely in Access.
    Now we want to move away from Access and use Oracle Personal Edition (PE) instead, simply because we want to use stored procedures and the same code for offline and online processing.
    So a typical user machine will have a PE instance and an Oracle client. Currently we use ldap.ora for configuring connections. Now I have a few questions:
    1. How do we ensure that Oracle PE will work when we don't have a network connection? Can we have the PE set up with tnsnames.ora?
    2. What is the smallest possible install package for PE?
    3. Can I use one client to access both the PE and server databases?
    Any help will be highly appreciated.
    Thanks in advance.
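    For question 1, a hedged sketch of what a tnsnames.ora entry for the local PE instance might look like (the alias, service name, and port are placeholders; no LDAP lookup is involved for this entry):

    ```
    ORCLPE =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orclpe))
      )
    ```

    Because it resolves to localhost, it keeps working when the machine is offline; one client can then use LDAP for the server databases and tnsnames for the local one, depending on the NAMES.DIRECTORY_PATH order in sqlnet.ora.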

    Assuming the "Xcopy installation" refers to using the Windows xcopy command, can you clarify what, exactly, you are installing via xcopy? Are you just using xcopy to copy the ODP.Net bits? Or are you trying to install the Oracle client via that approach?
    If you are concerned about support, you would generally want to install everything via the Oracle Universal Installer (barring those very occasional components that don't use the OUI). Oracle generally only supports software installed via the installer because particularly on Windows, there are a number of registry entries that need to get created.
    You can certainly do a custom install of the personal edition on the end user machines. There are a few required components that I believe have to be installed (that the installer will take care of). I assume your customization will take the form of a response file to the OUI in order to do a silent install?
    Justin

  • CCMS configuration question - more than one sapccmsr agent on one server

    Hello all,
    this might be a newbie question, please excuse me:
    We have several SAP systems installed on AIX in several LPARs. The SAP application server and SAP database are always located in different LPARs, but one LPAR can host the application servers of several SAP systems, or the databases of several SAP systems.
    So I want to configure SAPOSCOL and CCMS agents (sapccmsr) on our database LPARs. SAPOSCOL is running; no problem so far. Because we have DBs for SAP systems with kernels 4.6D, 6.40 (NW2004), and 7.00 (NW2004s), I want to use two different CCMS agents (version 6.40 non-Unicode to connect to SAP 4.6D and 6.40, plus version 7.00 Unicode to connect to SAP 7.00).
    AFAIK only one of these can use shared memory segment #99 (the default); the other one has to be configured to use a different segment (e.g. #98), but I don't know how (I couldn't find any hints in OSS, the online help, or the CCMS agent manual).
    Any help would be appreciated.
    regards
    Christian
    Edited by: Christian Mika on Mar 6, 2008 11:30 AM

    Hello,
    has really no one ever had this kind of problem? Do you all use either one server (e.g. Windows) for one application (SAP application server or database), or the same server for application and database? Or don't you use virtual hostnames (aliases) for your servers, so that in all the mentioned cases one CCMS agent per server would fit your requirements? I can hardly believe that!
    kind regards
    Christian

  • Master iPad configurator question concerning cart syncing with different versions of iPads.

    I have a question concerning Configurator syncing. Can the master iPad be a different model of iPad than the other synced iPads? For instance, can an iPad 2 be the master iPad for a group of iPad Airs? The iPad 2 has somewhat fewer capabilities than the Air; would some settings or restrictions be left off the iPad Airs if they were set up this way? Thanks.

    There is no such thing as a "master iPad". If you're using Configurator or Profile Manager, control of the setup is done from a Macintosh.
