Issues with running controller-agent config on OLT

Hi All,
I have 3 machines in total: one Windows 2008 box (the controller/server) and two RHEL 5.2 boxes (the agents). The same script exists in all 3 locations, and running it individually in each environment works fine. The script reads a local databank, a CSV that holds the JAVA_HOME path used to execute a jar and generate a secret key. On the controller (the Windows box) I clicked Manage --> Systems, added the agent details, and that completed successfully. Then, from the controller, I added the script, pointed the system at the agent's name, and ran the test; the script fails at the point where the secret key is generated using the jar. My guess is that the controller reads the local path from its own CSV and cannot generate the secret keys using the local JAVA_HOME and jar. Can anyone kindly comment on this issue?
Thanks,
Nags.
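For anyone reading along, the failure mode can be sketched in a few lines. This is only an illustration, not OLT's actual mechanism: the column name JAVA_HOME, the CSV layout, and the helper names are all assumptions. The point is that a path stored in a databank is only meaningful on the machine whose filesystem it describes, so a Windows-style JAVA_HOME read on the controller will not resolve on a RHEL agent (and vice versa); each machine has to read its own local databank.

```python
import csv
import os


def read_databank(path):
    """Return the first data row of a databank CSV as a dict (column -> value)."""
    with open(path, newline="") as f:
        return next(csv.DictReader(f))


def java_launcher(databank_row):
    """Build the path to the java binary named in the databank row.

    On Windows the binary is java.exe; on the RHEL agents it is plain java.
    Either way, the resulting path only exists on the machine the databank
    was written for -- which is the suspected failure here.
    """
    exe = "java.exe" if os.name == "nt" else "java"
    return os.path.join(databank_row["JAVA_HOME"], "bin", exe)
```

If the controller resolves the path itself instead of letting each agent resolve it locally, the jar launch fails exactly as described.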


Similar Messages

  • Issue with re-skilling agents in bulk

    We tried to re-skill about 30 agents in bulk and change their competence level at the same time. UCCX took the change, but we noticed that agents stayed in a Ready state even while calls were in the queue.
    We went back and re-skilled them one by one, and at that point I'm told it worked: agents received calls.
    Has anyone ever had issues with re-skilling agents in bulk?
    We are on UCCX 8.5.1 SU3.
    Thanks,
    Dan

    Hi PARADOX-RED,
    Based on the error messages you provided, please completely remove the connector software from Control Panel. Then, on the problematic computer, go to Control Panel -> System and select Change settings. Select the Computer Name tab in the System Properties dialog box and click the Change button. In the Computer Name/Domain Changes dialog box, take the computer out of the domain into a workgroup, then rejoin it to the new domain.
    Please check whether this solves the issue. If it still exists, please don't hesitate to let me know.
    Hope this helps.
    Best regards,
    Justin Gu

  • HT3131 Are there any ventilation issues with running a macbook air in clamshell mode?

    Are there any ventilation issues with running a macbook air in clamshell mode?

    ...that makes sense to me.
    But why do so many MBA users (on this forum and others) claim that ventilation occurs through the keyboard? Is this claim based on an old MacBook design?
    For example, here's a related (though fairly old) discussion:
    SP Forsythe (Notebook Evangelist, California), 11-08-2010, 12:16 PM
    Re: Is it safe to use MBP with screen lid closed? Leave charge on always?
    Quote: 
    Originally Posted by tHE j0KER
    Actually, you shouldn't close the lid while running on an external screen. The keyboard of the MacBook is an air intake for the fan. Close the lid, and it could overheat.
    A common misconception of unknown origin. The intake and the exhaust for the cooling fan on the 13" MB and MBP are both located in the rear slot at the base of the hinge. In fact, you can actually see the divider that separates the intake flow from the exhaust flow. If ventilation were through the keyboard crevices, an awful lot of overheats would result from people using impermeable keyboard covers, and Apple's warranty department would be flipping out over the Apple Store carrying such covers: iSkin ProTouch FX Keyboard Cover for all MacBooks - Black Printed Keys on White - Apple Store (U.S.)
    Does one think that Apple sells these only for use when the notebook is off? http://store.apple.com/us/product/TW...co=MTM3OTUwMDE Apple's directions for use specifically cover using it with an external display. Closing the lid while operating the unit actually results in cooler operation, due to the reduction in power consumed by the display, which generates far more heat than simply powering the video port.
    ajreynol (Notebook Virtuoso, Ann Arbor, MI), 11-08-2010, 02:34 PM
    Re: Is it safe to use MBP with screen lid closed? Leave charge on always?
    Once again, I've tried a few of these keyboard covers. The Moshi keyboard cover is the ONLY one I can recommend; the others are too thick or change the keyboard experience too much.
    doh123, 11-08-2010, 03:34 PM
    Re: Is it safe to use MBP with screen lid closed? Leave charge on always?
    1. Closing the cover will cause more heat. This is not so much because it covers the keyboard (though that does help some heat be retained); it's mainly because of the shape of the hinge and the fact that, when closed, the lid covers up the back vent a lot more. For the best cooling, it is best to have the screen open. Just run it as a dual monitor, make the external display the primary, and if you don't want to use the built-in display, turn its backlight off and don't use it.
    2. The thing you plug into the wall is not a battery charger. The actual "charger" is built into the computer, and it knows when to charge and when not to. If the little light on the power plug is amber, it's charging your battery. When it's green, it's just powering the laptop and NOT charging your battery at all.
    Wolfpup (Notebook Virtuoso), 11-08-2010, 03:38 PM
    Re: Is it safe to use MBP with screen lid closed? Leave charge on always?
    I'd leave the lid open, at least partially. Yes, it may be fine not to, but you are making heat dissipate worse, and it could even, theoretically, hurt the screen.
    As for the battery, there are really only two choices: have it plugged in or not. As others mentioned, you can't overcharge the battery. It can be damaged a bit by heat, but of course the number one thing that will damage it is discharging it. So it's a no-brainer: use it plugged in whenever possible, and charge it whenever possible when it's not plugged in.
    SP Forsythe (Notebook Evangelist, California), 11-08-2010, 03:45 PM
    Re: Is it safe to use MBP with screen lid closed? Leave charge on always?
    Quote: 
    Originally Posted by doh123
    1. Closing the cover will cause more heat. This is not because of covering the keyboard very much (though it does help some heat be retained). It's mainly because of the shape of the hinge and the fact when closed it covers up the back vent a lot more. For the best cooling, it is best to have the screen open. Just run it as a dual monitor, but make the external the Primary monitor, and if you don't want to use the built in, just turn its backlight off and don't use it.
    Nope. Apple would disagree with you on that one. Any "closure" is insignificant compared to the CFM involved. In fact, the opening size remains the same; the flow is only deflected at a slight angle when the lid is closed. Tilt your MBP and see. As well, shutting down the display lowers the heat being generated, even in the lower case. As I said, if it were a problem, Apple would not be selling stands designed to operate your unit in the closed position, as the original poster of this thread proposes to do.

  • Cross Domain Connection Issues with Test Controller

    I am having trouble resolving a problem I have connecting a build agent with the test controller in a cross domain environment.  I have purged out the actual machine names, domain names, and IP addresses just in case that is a security concern. 
    Situation is this:
    All machines are running Visual Studio 2013
    Test controller/agents are on Windows Server 2012 R2
    Test controller is installed as stand-alone in order to be able to do load testing, as well as API and CodedUI.
    Build definition in TFS kicks off the automation using testsettings file to point to build controller
    Application under test uses resources in the ABC.XYZ domain.  Test agents need to be in ABC.XYZ in order to test application E2E.
    TFS is in Main.corp.company.com domain.
    Test controller is a dual-homed box in the corp.company.com and ABC.XYZ domains. It is accessible from Main using the corp.company.com NIC.
    All our dual homed boxes are set up this way.  Dual homed with Main and ABC directly is considered a security violation.
    From the dual homed box, logged in with my ABC credentials, I can access TFS in Main using my Main credentials.
    Manually, I can successfully kick off a test run from a command line from a VM in ABC.
    Build controller and build agents are in Main.corp.company.com.
    Build controller can successfully connect to build agent, and build agent successfully builds the automation.
    Build agent fails to connect to build controller:
    Failed to queue test run 'buildagent@MachineOne 2014-08-12 12:35:34_Any CPU_Debug': No such host is known
    I can ping the build controller from the test agent, and I can successfully query port 6901:
    Querying target system called:
    testcontroller.corp.company.com
    Attempting to resolve name to IP address...
    Name resolved to 10.10.10.111
    TCP port 6901 (unknown service): LISTENING
    Firewall is turned off on the test controller.  Even if it wasn’t, the relevant rules allowing port 6901 and File and Printer Sharing are created.
    Local Security Policy | Security Options | Network access: Sharing & security model = classic
    NETBIOS names of the test agents and build agent are set in the test controller’s hosts file (they were pingable without this anyway)
    NETBIOS name of the test controller is set in the test agent’s hosts file (it was pingable without this anyway)
    Tried both the simple NETBIOS name and the FQDN for the test controller in the testsettings file
    Considering installing a build agent on the same machine as the test controller, but suspect that would just move my communication problem to build controller : build agent
    Considering moving test controller to Main and making the four test agents dual-homed, but there is a concern to limit the number of dual-homed boxes, and also suspect that would again just move the communication problem.
    I can use netstat to verify that the service is listening to port 6901 on both NICs:
    TCP    0.0.0.0:6901           0.0.0.0:0              LISTENING       6536
    TCP    [::]:6901              [::]:0                 LISTENING       6536
    (PID 6536 is the QTController.exe)
    However the VSTTController.log only mentions listening to the ABC NIC.  Since the connection to the ABC test agents works, that makes sense.
    When I open the testsettings file on my laptop in the Main domain and examine the server name, there is a warning that the host cannot be found.  When I open it on a VM in the ABC domain I am able to manage the test controller and view all the test
    agents.  However, if I try to restart the build controller I get an access denied error.  Not sure if that is related in some way.
    I am using an ABC domain service account to run the test agent service. There is a Main domain service account running the build. Both service accounts are administrators on the test controller and are in the TeamTestControllerAdmins and TeamTestControllerUsers
    groups. The test agent service account is also in the TeamTestAgentService group.
    I tried to create a port proxy to forward requests from the Main facing NIC to the port on the ABC facing NIC:
    netsh interface portproxy add v4tov4 listenport=6901 listenaddress=10.10.10.111 connectport=6901 connectaddress=10.20.20.222
    This almost worked.  I could see with netstat commands that the port was opened and a connection was established with the build agent,  however after a long wait it hit an error that it couldn’t find the ABC NIC:
    A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.20.20.222:6901
    So apparently the ABC IP is getting forwarded back to the test agent, which then of course can’t use it.
    I am running out of ideas to try.  Not sure where the problem is:
    Cross-domain issue between Main.corp.company.com and corp.company.com?  Or,
    Problem with the test controller not being able to listen on more than one NIC?
    I know I am not the first person to try to set up something cross domain.  Most of the troubleshooting suggestions I have been able to bing have been about fixing connections between test controller and test agents, which isn’t the problem here. 
    Is this set up just so far from standard that VS can’t handle it?
    Thanks in advance,
    Gary
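The port 6901 reachability check described above can also be scripted. A minimal sketch (the port number comes from the post; the function name and hosts are illustrative) that attempts a TCP connection the same way a client would:

```python
import socket


def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP listener (e.g. the test controller on 6901)
    accepts a connection on the given host and port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False
```

Note that a successful connect, like the port-query output above, only proves a listener exists on that NIC; it says nothing about which address the controller hands back for follow-up connections, which is where the portproxy attempt fell over.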

    Hi Gary,
    Thank you for posting in the MSDN forum.
    >> Build agent fails to connect to build controller: Failed to queue test run 'buildagent@MachineOne 2014-08-12 12:35:34_Any CPU_Debug': No such host is known
    >> I know I am not the first person to try to set up something cross domain.  Most of the troubleshooting suggestions I have been able to bing have been about fixing connections between test controller and test
    agents, which isn’t the problem here. 
    Just to understand this issue clearly: you mean that it is not a Test Controller and Test Agent issue, am I right?
    As you said, it is related to the build controller and build agent, am I right?
    If it is related to the Build Controller and Build Agent, I suggest you post this issue to the TFS forum, there you would get dedicated support.
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?category=vstfs&filter=alltypes&sort=lastpostdesc
    If there's any concern, please feel free to let me know.
    Sincerely,

  • Issues with Small Business Switch config

    Hi, I know that if I read the documentation I will come to the answers, but I would really like some input from someone with more knowledge than me. I have an issue with a Cisco SF300, one of the Small Business switches. I have a single interface on my router and I need to separate my internal networks, and I thought one way would be to use VLANs. Of my two internal networks, one has only unmanaged D-Link switches and the other has the Cisco SF300, so I did as follows.
    On the Cisco switch, all ports default to trunk ports. I have changed FE1-FE24 and GE1-2 to access ports.
    I created two VLANs and placed FE1-FE24 in VLAN10 (also my management VLAN). GE3 is a trunk port with VLAN20 untagged; VLAN20 uplinks to my D-Link switches. This way traffic from my unmanaged switches comes in untagged on a trunk port on VLAN20.
    GE4 is a trunk port and I have assigned VLAN1 untagged, VLAN10 tagged and VLAN20 tagged. VLAN10 and VLAN20 then forward to my router.
    The plan was to connect GE4 to my router; however, two things happened that I cannot explain.
    Firstly, as soon as I connected my D-Link to GE3, the LAN on VLAN20 went down; I could not ping servers from PCs, etc. (all devices are connected to the unmanaged D-Links). Secondly, the VLAN assignment changed on GE3 and GE4: VLANs 10 and 20 disappeared and only the default VLAN was assigned. Also, under VLAN Settings my VLAN interface state for VLAN20 shows Disabled, and one of my access ports, FE12, keeps changing VLAN.
    Can anyone offer any suggestions as to what might have crashed the LAN and why my VLANs change. I did write my running config to the start up config by the way.
    I added two screen shots. 
    I would seriously appreciate some help.
    Thanks 
    Bob

    Hi Garrett, thanks for your reply to my post, I hope you are well. I called Cisco support; they told me that they could not understand why this was happening and suggested a firmware upgrade, which is something I should have considered right from the beginning. This solved the issue for me.
    Thanks
    Bob

  • Issues with running datapump export

    Hi all!
    I am running into an issue with Data Pump exports on 6 servers. They are running the following:
    Windows 2008 server R1 standard
    Oracle Enterprise Edition 11.1.0.7
    I am entering the following:
    expdp directory=DIRECTORY dumpfile=TEST1.dmp schemas=ORCL
    I get the following error when doing the export:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
    Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "PSSBACKUP"."SYS_EXPORT_SCHEMA_05": pssbackup/******** directory=PSS_BACKUP dumpfile=testing2345.dmp schemas=cuba
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 481.6 MB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31672: Worker process DW01 died unexpectedly.
    Job "PSSBACKUP"."SYS_EXPORT_SCHEMA_05" stopped due to fatal error at 14:24:16
    Oracle support was brought in on this but they were not able to figure it out. Can anyone help out?

    I got this from the trace file:
    *** 2010-10-01 14:23:10.760
    Exception [type: ACCESS_VIOLATION, UNABLE_TO_WRITE] [ADDR:0x0] [PC:0x2CFAF23, kuxgetHashCode()+121]
    Incident 60413 created, dump file: D:\APP\PSSADMIN\diag\rdbms\pss2437\pss2437\incident\incdir_60413\pss2437_dw01_3740_i60413.trc
    ORA-07445: exception encountered: core dump [kuxgetHashCode()+121] [ACCESS_VIOLATION] [ADDR:0x0] [PC:0x2CFAF23] [UNABLE_TO_WRITE] []
    I haven't been able to pull up any information indicating how to fix this error. I'm not sure what the [UNABLE_TO_WRITE] part of the error is referring to.

  • Issues with running Server 2012R2 Essentials as an Offline Standalone Root CA?

    Hi everyone,
       I've searched this forum as well as Google and have not been able to find any concrete answers so I am hoping one of you experts will be able to assist me.  
       I have an all-Windows 2008 server/domain environment. I was looking at implementing a two-tier PKI with an offline, standalone root CA and an enterprise issuing CA (a 2008 member server). Budgets are tight, so I was hoping
    I could get away with using Server 2012 R2 Essentials as the offline standalone root.
       It is my understanding that 2012 Essentials is configured as a DC by default and needs to remain that way per licensing.   I know the recommended configuration for an offline root would be to have the server be in a workgroup and not
    on a domain.
       So question 1 is will 2012 Essentials work as an offline standalone root?
       Question 2 is will there be any issues with it running as described?   In other words will the fact that it is the sole DC in its own domain cause issues with its use as an offline root?  
       Thank you in advance for your help!

    The Essentials experience role runs on Server Standard and is very different from the Essentials product in both licensing and pricing. While you can indeed buy Standard and deploy the Essentials experience role as a "standalone" server, there would be ZERO benefit in a PKI offline-root scenario: the Essentials role has no automation or configuration options in the dashboard for that use case, and on pricing you'd still be paying for Server Standard.
    The Essentials product (or SKU) has the benefit of reduced cost, but cannot be deployed standalone and has enough other restrictions that it is not well suited for the desired use.
    So either way, my answer stands: Essentials (product or role) is not the right tool for the job in this case.

  • Issue with running Lexmark Prestige Pro805 printer with iMac running 10.3.9

    Thought I'd share this with you in case anyone else runs into this problem.
    Recently bought a Lexmark Prestige Pro 805 printer to service a home network. Had no issues with the printer using 10.6 on MacBook Pro, 10.5 on Intel iMac and also running on Laptop with Windows XP all running off same network via USB connection into a Time Machine.
    However, when I tried running this off a PPC iMac using 10.3.9, although I could print after the initial install, subsequent prints after logging off and on hit the problem "Unable to read CUPS raster data".
    Lexmark driver updates and firmware updates failed to fix the issue, and I contacted Lexmark, who advised me to do a complete uninstall and re-install, but to the same effect. They then determined this was a CUPS issue and that I should contact Apple. I looked on this forum and found a message posted in Nov 2005 which suggested doing a disk repair off the original install disk in order to repair corrupted CUPS directories (simple to say after 6 years: where exactly did I put those disks?).
    Anyway, I did this, and indeed both permissions and disks needed repair, and it fixed the issue.
    As it took me a while to get to the answer hopefully by posting this if anyone else comes across this issue they can save some time!

    Thanks for posting your solution. May help another Mac user in the future.
     Cheers, Tom

  • Issue with running redistributeactivedatabases.ps1 in new-pssession

    Hi,
    I am stuck with an issue running redistributeactivedatabases.ps1 in a New-PSSession.
    Is there any best practice, or some other script, to redistribute the mailbox databases using New-PSSession?

    Hi Jrv,
    I am sorry for not being very clear.
    I also know what I am trying is not very sane, but I cannot help it, as I have orders that I have to achieve this.
    Below is the exact situation.
    We have a JP1 server (automation) which we use to reboot/failover/failback.
    I have to automate the failover and failback scripts. I am able to achieve the failover part, as it is a simple command:
    Move-ActiveMailboxDatabase -Server servername
    and it is working.
    For failback, and to automatically rebalance the mailbox databases according to their activation preference, I am using the script below, which I have to run using cmd.
    $mbxs = Get-MailboxDatabase | Sort-Object Name
    ForEach ($mbx in $mbxs) {
        $MBdb = $mbx.Name
        $ServerHosting = $mbx.Server.Name
        # Walk the copy list and find the copy with activation preference 1
        ForEach ($ActivationPreference in $mbx.ActivationPreference) {
            If ($ActivationPreference.Value -eq 1) {
                # If the database is not active on its preferred server, move it back
                If ($ServerHosting -ne $ActivationPreference.Key.Name) {
                    Move-ActiveMailboxDatabase $MBdb -ActivateOnServer $ActivationPreference.Key.Name -Confirm:$False
                }
            }
        }
    }
    The problem I am getting is that the script does not work when I run it in cmd or in PowerShell.
    But if I run the same script on the Exchange server using the Exchange Management Shell, there are no issues.
    So I need your help so that I can run the above script using cmd in a New-PSSession.
    Hopefully I am making some sense now.

  • Are there issues with running CS 6 on Windows 8? (NOT the CC edition)

    I'm looking at getting a new computer and am wondering what issues I'll have using my CS6 suite if I get a Windows 8 machine. I know that the Creative Cloud versions are compatible, but what about the 'old' software?

    So, it seems my issue with the Limited Internet connection has been resolved. I didn't use my computer for two days and this afternoon when I turned it on I was still having issues. However, I went into the HP Support Assistant and it popped up that I had some updates to download and install, so I let it do its thing. It took a while because the Internet kept cutting out, but once the update for the Intel Wireless finally installed, my connection has STAYED CONNECTED. Finally!
    I don't know why when I went into the wireless properties it said "device is working properly" and when I tried to Update the driver it said "your driver is the latest version" or whatever.
    I suggest you go to the HP Support Assistant and check for updates in the "Updates and Tune-Ups" section. It was kind of random, but knock on wood my Wireless connection issue looks to be gone.
    Hope this helps
    P.S.
    Here's a screenshot of what the main screen of the HP Support Assistant (it's the one with the big question mark logo) looks like:

  • Issues with SCOM 2012 Agent on Red Hat 5 Server

    We are running SCOM 2012 server and have deployed the agent successfully to a number of Red Hat Linux servers. I am having an issue on about a quarter of the hosts, in that they appear as HEALTHY but are grayed out and not green. When I look at the /var/opt/microsoft/scx/log/omiserver.log
    file I see:
    WARNING: wsman: authentication failed for user scom2012
    I have verified that the system account is set up with the correct password and the runas account is set up with the correct password (I am able to deploy the agent from the SCOM server using it, so the passwords DO match).
    Any ideas?

    I've seen this on a few systems here when the agent has been upgraded but the old agent process does not die off.  Just to rule it out, pick a node, make sure there are no instances of scxcimserver or scxcimprovagt and then start the agent and
    see if the issue goes away.  I've also seen wsman authentication failures related to the libssl issue that was fixed in yesterday's release.
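Just to sketch that rule-out check: the process names below come from the reply above, while the helper names and the `ps` invocation are illustrative, not part of the SCX agent's own tooling.

```python
import subprocess

# Leftover agent processes that, per the reply above, should not be
# running when the upgraded agent is started cleanly.
STALE_AGENTS = ("scxcimserver", "scxcimprovagt")


def find_stale(ps_output, stale=STALE_AGENTS):
    """Return any stale agent process names found in `ps -eo comm` output."""
    running = {line.strip() for line in ps_output.splitlines()}
    return [name for name in stale if name in running]


def stale_agent_processes():
    """Run ps and report leftover agent processes to kill before restarting."""
    out = subprocess.run(["ps", "-eo", "comm"],
                         capture_output=True, text=True).stdout
    return find_stale(out)
```

If the list comes back non-empty, kill those processes first, then start the agent and see whether the wsman authentication warnings stop.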

  • Issue with running QuickTime on Windows: Buffer Overrun Error (C++ Library)

    The initial problem was a Buffer Overrun Error (C++ Library) when clicking on QuickTime after installation, i.e. QT would not even open. http://support.microsoft.com/kb/831875#appliesto
    I took these steps:
    1. Tried to uninstall QuickTime by itself (it failed).
    2. Manually deleted the Apple, iTunes and QuickTime files from the entire system (wherever it let me).
    3. Manually removed the Apple entries from the registry.
    4. Left the items in the recycle bin (in case there were any real issues and I needed something restored).
    5. Performed a registry clean-up (RegCure).
    6. Turned off my entire antivirus.
    7. Downloaded the QuickTime and iTunes installers separately to the desktop.
    8. Tried to install QuickTime from one of the two saved files on my desktop, but encountered a serious fault:
    9. it needed the QuickTime Installer to remove QuickTime itself, else it failed and nothing happened. It complained about a QuickTime.msi file, which was a problem.
    10. Went to the recycle bin and restored only the components which were marked QuickTime Installer.
    11. Ran Remove for QuickTime instead of Repair.
    12. Went to the website and installed QuickTime 7 directly.
    13. It opened on the desktop after installation.
    14. Installed iTunes separately from the desktop and it opened directly.
    15. Rebooted my PC.
    16. Enabled all my security again (McAfee).
    17. Opened QuickTime and then iTunes, one by one.
    18. Created a system restore point with a note for the future.
    This was a very difficult task and required a lot of steps. I am glad you helped me with the removal part. It's great to have everything working again on my PC.
    I hope this was helpful; it took me ages to fix.

    I'm experiencing exactly the same bug. Matter of fact, it's the first time in years that I've run across this kind of 'problem' when using non-beta software from a major player. Too bad. This really reflects poorly on Apple's credibility.

  • Strange issues with domain controller/DNS server

    Our domain controller/DNS server was working fine this morning. Then suddenly we stopped being able to access certain things on it. I could ping it, RDP into it, and access some files on it, but I couldn't run any applications hosted on it, accessing shared
    network files was slow, and different people around the office were getting access-denied errors on files and folders they had full control of in NTFS (and in the share permissions).
    At first I noticed an NTP error so I registered w32tm and started the service and that got rid of the error but didn't fix anything.
    Oddly, machines still had internet access.
    We tried rebooting everything, restarting services, nothing has helped.
    When I accessed the server directly through the console I could access everything, could connect to any machine in the office, nothing seemed to be wrong with it.
    Any ideas?

    Are there any recent changes in your network, firewall or antivirus? Were any changes/updates performed on the AD side? I would suggest finding out what changes were made at the AD or network/firewall level. You can run various diagnostic tests within your AD
    environment to find the overall health of the AD infrastructure.
    What does DCDIAG actually… do?
    Active Directory Replication Status Tool Released 
    http://msmvps.com/blogs/ad/archive/2008/06/03/active-directory-health-checks-for-domain-controllers.aspx
    Awinish Vishwakarma - MVP
    My Blog: awinish.wordpress.com

  • X58 pro i7 pci express 2.0 issue with sata controller

    Hello,
    I installed a Rosewill RC-225 (Marvell controller) PCI Express 2.0 x1 card. My Crucial M4 SSD maxes out at 271 MB/s.
    It is installed in the x16 slot. When I tried the Gen 1 x1 slot I was getting a pathetic 130 MB/s. What is going on? On Newegg, people with this drive and this controller are getting more than 400 MB/s.
    BTW, I overclocked the PCI Express bus and it improved the speed. I don't get what is going on...

    Quote from: brasstech on 12-September-11, 09:33:56
    Running Windows 7 Ultimate 64Bit, Video card is EVGA GTX295, Marvell 91xx Sata 6G Controller Driver is 1.0.0.1034.
    Oh wow, that is crazy.
    So there are only 2 explanations left, since I only have a GTX260:
    1. I have some fluke Rev 0 motherboard.
    2. The Marvell chip doesn't like Crucial SSDs, which I did sort of hear on the Crucial forum. (I am thinking this.)
    To finally conclude that it is the Crucial SSD, can you tell me which revision/model of motherboard you have? I think I have seen it printed on a corner of the motherboard, or it's in the BIOS.
    I'll get mine from the BIOS tonight.
    Also, I wonder if overclocking the CPU increases the speed... I asked brasstech if he overclocked; I hope he did, as that would explain things.

  • Issue with running reports on the portal

    Hello guys,
    1. I have a question regarding running reports on the portal.
    2. I have standard web templates that have been installed in BI.
    3. But in the portal, we are not able to run these reports.
    4. Do we need to create custom iViews, or can we install them from Business Content?
    5. Can someone explain the process to me, and also any other issues that we might have?
    Thanks.

    Hi Srinivas,
    Standard web templates do not get installed with their attached queries. Create custom web templates in WAD and attach your queries, based on the type of display you like, either tabular or graphical. In order to display them in the portal, you have to attach the web templates to iViews, group the iViews into worksets, and attach those to portal roles; then add those roles to the portal users. From the BI perspective you just create the web templates and give the technical names to the EP consultants, if you have any; they can take care of the rest. If you want to call the standard web template 0ANALYSIS_PATTERN from the portal, you need the Business Explorer role added to your user in EP. Using that template you can only open and execute one query at a time. Hope it's clear.
    Chandu
