Booting to maintenance mode

Hi,
Still being new to Solaris 10, I'm having problems. I had the same problem before with Solaris not booting up, so I reinstalled; it booted OK and I tried it a couple of times, but today it won't boot up. I have followed what the screen says, trying different ways to boot, but I can't get any further than maintenance mode. I have got into failsafe mode but cannot do anything there; I also tried running "svcadm clear system/boot-archive" etc.

Hi Hitesh,
I tried it, but the problem is that when I log in in maintenance mode I can't execute svcadm
(and not only that, none of the commands like vi, ls etc. work).
What I found is that /usr is not mounting properly in maintenance mode,
hence the system cannot get to /usr/sbin, and hence to any of the executables.
And in failsafe mode I can't clear it either:
# svcadm clear system/boot-archive
svcadm: Instance "svc:/system/boot-archive:default" is not in a maintenance or degraded state.
jj
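(A minimal sketch of what usually works when svcadm itself won't run because /usr is missing: do the repair from failsafe, where the tools live in the miniroot. This assumes failsafe has mounted your root at /a; adjust the path if yours differs.)
# bootadm update-archive -R /a
# umount /a
# init 6
Note that "svcadm clear" only applies once the service really is in a maintenance or degraded state after a boot, which is exactly what the error message above is saying.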

Similar Messages

  • Machine booting up to maintenance mode only

    Hi guys,
I have a problem with one of my Solaris 10 servers: it always goes to maintenance mode.
I'm getting the following message while booting up.
    WARNING: The following files in / differ from the boot archive:
    The recommended action is to reboot to the failsafe archive to correct
    the above inconsistency. To accomplish this, on a GRUB-based platform,
    reboot and select the "Solaris failsafe" option from the boot menu.
    On an OBP-based platform, reboot then type "boot -F failsafe". Then
    follow the prompts to update the boot archive. Alternately, to continue
    booting at your own risk, you may clear the service by running:
"svcadm clear system/boot-archive"
When I logged in in failsafe mode, the OS partition was mounted as /a.
Then I edited vfstab, and then ran bootadm as follows:
bootadm update-archive -R /
But after restarting, the machine still goes to maintenance mode.
Any clue how to get rid of it?
    Thanks in advance..

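(One guess worth checking in the steps quoted above: with the OS partition mounted at /a in failsafe, bootadm has to be pointed at /a rather than at the miniroot's own root, otherwise the real boot archive never gets rebuilt and the next boot hits the same mismatch.)
# bootadm update-archive -R /a
# umount /a
# init 6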

  • 5508-HA standby in Maintenance mode

My standby controller is in maintenance mode. Other posts say to simply reboot the standby, but I'm worried about doing this during business hours. Say I did reboot it during business hours, would it affect the active controller? All redundancy links are connected.
    (Cisco Controller) >show redundancy sum
     Redundancy Mode = SSO ENABLED
         Local State = MAINTENANCE
          Peer State = UNKNOWN - Communication Down
                Unit = Secondary - HA SKU
             Unit ID = 00:06:F6:DC:17:00
    Redundancy State = Non Redundant
        Mobility MAC = 68:EF:BD:8E:61:E0
    Maintenance Mode = Enabled
    Maintenance cause= Negotiation Timeout

    No it won't affect the active controller:
While booting, the WLCs will negotiate the HA role as per the configuration done. Once the role is determined, the configuration is synced from the Active WLC to the Standby WLC via the Redundant Port. Initially, the WLC configured as Secondary will report an XML mismatch, download the configuration from the Active, and reboot again. During the next reboot after role determination, it will validate the configuration again, report no XML mismatch, and proceed to establish itself as the Standby WLC.
    http://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/7-5/High_Availability_DG.pdf
    https://supportforums.cisco.com/discussion/11758901/ask-expert-high-availability-wireless-lan-controller-wlc
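(A standby stuck in maintenance mode needs a manual reboot to renegotiate anyway; a minimal sketch, assuming you issue it on the standby's own console so that only that unit restarts:)
(Cisco Controller) >reset system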

  • Boot to safe mode

hello. I am trying to perform a long-overdue system clean/maintenance on my MacBook running OS X 10.7.5 and I can't boot to safe mode using the Shift key. I am sure that I have tried every possible permutation of holding down the key during startup. I seem to remember an Apple genius using a key combo -- Control-7? -- to start up in safe mode when I had a problem way back... thoughts please. thanks so much!

The plain old Shift key is safe mode. It boots with far fewer drivers and no startup items.
Command-S drops you to a command line to do a file system directory cleanup via /sbin/fsck -fy.
The Option key gives you the option to go to the Lion Recovery partition and run Disk Utility, which does the same thing as /sbin/fsck -fy via Repair Disk, or to reinstall Lion, which I would not do unless you have troubleshot all other possibilities if you are having problems. Either way, a backup is important to do before anything else. See my tip:
https://discussions.apple.com/docs/DOC-1992
Do not under any circumstances run a system cache cleaner, or MacKeeper. Their side effects are much more pain to remove than they are worth.
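(A minimal sketch of the Command-S route mentioned above, assuming a built-in keyboard and that you hold the keys right after the startup chime:)
Hold Command-S at power-on, then at the root prompt:
/sbin/fsck -fy        (repeat until it reports the volume appears to be OK)
reboot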

  • Unable to enter maintenance mode - dtrt1000

Over the last few weeks my box has become almost useless. With increasing frequency it only part-records, freezes requiring rebooting, blanks the screen momentarily, and is generally a pain in the rear. I've tried to get into maintenance mode to try and reset it, but following the guide from YouView it doesn't work, and the box just starts as normal.
My understanding is that you (1) switch the box off from the back, (2) switch it back on after 30 secs, (3) press the power button and immediately hold the VOL- button until the splash screen appears... then you can access the menu.
I've tried holding the VOL- button before the power button, holding it immediately after the power button, holding it until the 'nearly ready' screen, in low and high eco modes etc., but get the same result: it just goes straight into normal programmes.

1. Start with the YouView box powered off from the switch on the REAR panel power button.
2. Power the YouView box back on using the REAR panel power button.
3. When the FRONT power button is illuminated with an orange circle, press the FRONT panel power button firmly once and it will turn blue.
4. Immediately press and hold the "VOL-" button, which can be found on the right of the FRONT panel (the first silver button).
5. A message saying "Enter Maintenance Mode Y/N (Y: POWER)" appears on the TV screen.
So if I understand correctly, your experience is that steps 1, 2 and 3 occur, but when doing step 4 the step 5 message does not appear and your box boots as normal.
My own (historic) experience is that the timing of stages 3 and 4 is quite tight, i.e. the power button turning blue followed immediately by the VOL- button being pressed and held.
Your options if maintenance mode will not work are to contact BT support and seek their advice, or, if you are not concerned about keeping the recordings on the box, to do a factory reset from the main YouView menu.
    https://community.youview.com/youview/topics/top_tip_soft_reset_reboot_power_cycling_maintenance_mod...

  • Solaris 10 disk mirror partition goes in maintenance mode at reboot

    Hello
I have got Solaris 10 installed on a Sun machine with two disks mirrored to each other. Security Toolkit 4.2 is also installed. Now, every time the system reboots and I do a metastat, the mirror partitions are in maintenance mode and I have to individually metasync the mirrors after every reboot.
I guess this is due to the Security Toolkit playing up. Would really appreciate any help to sort this out; the mirrors should automatically resync after a system reboot.
    Thanks in advance.
    Pioneer

Hi, yes I did run metaroot. If I metasync manually, it's all OK. My problem is that the partitions do not auto-sync after the system boots.
I guess this is something to do with the Security Toolkit 4.2 playing up and disabling some services at boot. Has anyone faced this issue?
    Many Thanks
    Pioneer
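(For context, the boot-time resync normally comes from the legacy SVM rc script running metasync -r; if the Security Toolkit disabled that script, it would behave exactly as described. A minimal sketch of the manual loop, assuming a mirror named d10 -- the metadevice name is hypothetical:)
# metastat | grep -i resync        (see which mirrors are out of sync)
# metasync d10                     (resync one mirror)
# metasync -r                      (or resync everything that needs it, as the boot script would)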

  • HELP: WLC AP-SSO not working (standby unity in maintenance mode)

I have two WLCs on version 7.3.101.0, with the standby unit having the HA-SKU. I tested the AP-SSO functionality without any problem in the lab with a direct connection on the RP port between the two WLCs. Once I brought them into the data centres in separate locations (latency is less than 10ms between the two DCs), the standby unit always went into maintenance mode. The boot process on the standby unit ends in maintenance mode as shown below:
Management Gateway and Peer Redundancy Management interface are not reachable.
Entering maintenance mode...
I have checked on the core switches at the two data centres that the two WLC RP ports are connected to the same VLAN and that it is spanned across the MAN link (10GB and less than 10ms delay). Spanning tree on those ports is forwarding as well.
I have rebooted the second unit, but no luck.
The link between the two DCs uses MTU 9216, which I do not think would cause this issue.
Has anyone come across the same or a similar issue, or does anyone know the solution? If you do, please enlighten me.
    Thanks

Thanks Leo and Scott for your feedback. I notice there are two newer software releases for the WLC, versions 7.3.102.0 and 7.4.100.0.
Both of them seem to have many open caveats. In my wireless environment I also use ISE, MSE and Prime Infrastructure, and unfortunately WLC 7.4 does not yet support Prime or MSE according to the compatibility matrix below.
http://www.cisco.com/en/US/docs/wireless/controller/5500/tech_notes/Wireless_Software_Compatibility_Matrix.html
I think my only choice at the moment is a minor upgrade to 7.3.102.0 (please correct me if I am wrong). This software was published on 30th Jan 2013, so I wonder whether someone else has tried it and managed to get a WLC AP-SSO setup working flawlessly where the second WLC unit is at a different location?
Would appreciate more info and advice.
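(A minimal sketch of the switch-side checks implied above, assuming the RP ports sit in a dedicated VLAN -- 999 here is hypothetical; the aim is just to confirm the RP VLAN really is contiguous at layer 2 across the MAN link:)
SWITCH# show vlan id 999
SWITCH# show spanning-tree vlan 999
SWITCH# show interfaces gigabitethernet 1/1 status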

  • X86 sc3.1-0805 sol10-0606 - Doesn't boot in cluster mode

    Hi,
I'm on my first experience with Sun Cluster on x86.
I've already tried at home with two P4 whiteboxes, and am now repeating the experiment here at work with a similar config (I happily run six V490s in three clustered pairs with 3510FC storage, plus a test clustered pair of two U10s with a Multipack).
No matter what I try, I always end up with the same result:
the nodes boot up outside of the cluster. The interconnects don't start and (I think maybe because of that) the global devices don't get initialized.
I have already tried many reinstalls, and already tried to add etc/cluster/nodeid to filelist.ramdisk, update the boot archive and reconfigure, as described in an InfoDoc to work around a well-known problem, but nothing changed.
This is the situation when I start either one of the nodes:
    mordor-nodo2 # svcs -x
    svc:/system/cluster/mountgfsys:default (Suncluster mountgfsys service)
    State: maintenance since Tue Aug 08 15:37:51 2006
    Reason: Restarter svc:/system/svc/restarter:default gave no explanation.
    See: http://sun.com/msg/SMF-8000-9C
    See: /var/svc/log/system-cluster-mountgfsys:default.log
    Impact: 13 dependent services are not running. (Use -v for list.)
    svc:/system/cluster/gdevsync:default (Suncluster gdevsync service)
    State: maintenance since Tue Aug 08 15:37:51 2006
    Reason: Restarter svc:/system/svc/restarter:default gave no explanation.
    See: http://sun.com/msg/SMF-8000-9C
    See: /var/svc/log/system-cluster-gdevsync:default.log
    Impact: 13 dependent services are not running. (Use -v for list.)
    svc:/network/multipath:cluster (Network Monitor Daemon)
    State: maintenance since Tue Aug 08 15:37:39 2006
    Reason: Maintenance requested by an administrator.
    See: http://sun.com/msg/SMF-8000-63
    See: in.mpathd(1M)
    See: /etc/svc/volatile/network-multipath:cluster.log
    Impact: This service is not running.
Only the public interface is up, in the sc_ipmp0 group.
The cluster interconnects are 3Com elxl interfaces in both nodes and are connected with cross-cables, elxl0->elxl0 and elxl1->elxl1 (verified that they work).
I removed the switches and put in cross-cables while troubleshooting, to have a simpler setup.
    /etc/vfstab - every fs is mirrored - metadb are in s7 , globalfs in s3
    /dev/md/dsk/d0 - - swap - no -
    /dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes -
    #/dev/md/dsk/d20 /dev/md/rdsk/d20 /globaldevices ufs 2 yes -
    /dev/md/dsk/d20 /dev/md/rdsk/d20 /global/.devices/node@1 ufs 2 no global
I am planning to add a Multipack for the multihost disks, but I'd like to solve this problem first.
Nothing useful appears in the logs.
I only get the initial
Not booting in cluster mode
and nothing more.
Maybe I am missing something about the versions of the software/hardware I am using that for some reason can't work together?
Or is some fix needed?
Any hint would be appreciated.
I am at your disposal for any kind of info.
thanks

    Hi,
you should elaborate a bit on your hardware.
Is it x64 or x32, and what is your shared storage?
The "not booting in cluster mode" appears, for example, if you try to install SC 3.1 or SC 3.2 on x32 hardware. If that is your goal, you should start with Solaris Express and Sun Cluster Express.
    Kind regards
    Detlef
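(A minimal sketch of the filelist.ramdisk workaround the question refers to, assuming Solaris 10 x86 and the stock file locations; this is the InfoDoc recipe as I understand it, not a verified fix for this case:)
# echo etc/cluster/nodeid >> /boot/solaris/filelist.ramdisk   (if not already listed)
# bootadm update-archive
# touch /reconfigure
# init 6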

  • Requesting System Maintenance Mode

I just installed Solaris 10 on a SunFire V100 and patched the OS. I added a second drive and mirrored it. Now I get this message 2 out of 3 times when I reboot the server.....
    LOM event: +36d+22h57m55s host reset
    Sun Fire V100 (UltraSPARC-IIe 548MHz), No Keyboard
    OpenBoot 4.0, 1024 MB memory installed, Serial #57166297.
    Ethernet address 0:3:ba:68:49:d9, Host ID: 836849d9.
    Executing last command: boot
    Boot device: rootdisk File and args:
    SunOS Release 5.10 Version Generic_118822-27 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: gandalf
Jan 31 17:41:02 svc.startd[7]: svc:/system/boot-archive:default: Method "/lib/svc/method/boot-archive" failed with exit status 1.
Jan 31 17:41:02 svc.startd[7]: svc:/system/boot-archive:default: Method "/lib/svc/method/boot-archive" failed with exit status 1.
Jan 31 17:41:02 svc.startd[7]: svc:/system/boot-archive:default: Method "/lib/svc/method/boot-archive" failed with exit status 1.
[ system/boot-archive:default failed (see 'svcs -x' for details) ]
    Requesting System Maintenance Mode
    (See /lib/svc/share/README for more information.)
    Console login service(s) cannot run
    Root password for system maintenance (control-d to bypass):
    Any clue on what I should do?
    Thanks!
    Chris Edwards

    Hi,
Can you please check the note below:
Solaris11 cannot boot and goes into maintenance ( svc:/system/early-manifest-import:default exited with status 1 ) (Doc ID 1526559.1)
It has the solution/workaround. Kindly let us know in case the issue is not fixed.
    Thanks,
    Krishna
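(That note covers a different service on Solaris 11; since the failure above is system/boot-archive on Solaris 10, a likelier guess is that mirroring the disk left the boot archive stale. A minimal sketch, run from the maintenance shell after giving the root password; if root turns out to be mounted read-only, do it from failsafe instead, as in the first thread above:)
# bootadm update-archive
# svcadm clear system/boot-archive
(then Ctrl-D to let the boot continue)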

  • SUNFIRE V880 in maintenance mode

    Hi,
My SunFire V880 server starts into maintenance mode and is at run level 6.
I have tried to boot from the Solaris DVD from the ok prompt, and it cannot boot.
It is impossible to modify vfstab or inittab at run level 6.
What can I do? Please help me.
    thank you

    Hi.
Please describe the current state and what you see on the terminal.
Init level 6 means reboot the system, so it cannot be a stable state.
What error messages did you get when you tried to boot from DVD? What Solaris version did you try to boot?
What Solaris version was installed on the server?
    Regards.
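(A minimal sketch of the usual way to edit vfstab when the installed system won't stay up, assuming the DVD is bootable and the root slice is c1t0d0s0 -- the device name is hypothetical:)
ok boot cdrom -s
# mount /dev/dsk/c1t0d0s0 /a
# TERM=vt100; export TERM
# vi /a/etc/vfstab
# umount /a ; init 0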

  • FWSM maintenance mode - vlan 1

    Hi,
A client has had their FWSM fail: when you try to start the module, the switch eventually disables the power to that slot (%C6KPWR-SP-4-DISABLED: power to module in slot 4 set off (Module Failed SCP dnld)). I have turned off diagnostics with 'no diagnostic boot level' and then used 'boot device module 4 cf:1' to bring the FWSM up into maintenance mode. I can then session in from the switch and log in to the FWSM as root.
After inputting all the necessary IP info I can't ping anything on vlan 1 as I would expect. I have set the FWSM as 192.168.1.2 and an FTP/TFTP server as 192.168.1.1.
I have removed the firewall vlan groups and tried to put them back with just vlan 1, but this isn't accepted (the reasons are covered in other posts on the forum). What am I doing wrong, as the instructions say that vlan 1 is the only vlan that is accessible while the FWSM is in maintenance mode?
I can create an int vlan 1 on the switch and ping my FTP server, so I know the switchport is set up correctly. I can also see that Po308 is formed, and when the module boots I can see the Gi4/xx interfaces come up (the FWSM is in slot 4).
Any ideas of what to try next?
    ............and they aren't covered by maintenance agreements
    FWSM
    Maintenance image version: 2.1(4)
    [email protected]#show images
    Device name             Partition#              Image name
    Compact flash(cf)       4                       c6svc-fwm-k9.3-1-4-0.bin
    Switch
    SWITCH# sh ver
    Cisco IOS Software, s72033_rp Software (s72033_rp-ADVIPSERVICESK9_WAN-M), Version 12.2(33)SXI7, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2011 by Cisco Systems, Inc.
    Compiled Mon 18-Jul-11 05:49 by prod_rel_team
    ROM: System Bootstrap, Version 12.2(17r)SX7, RELEASE SOFTWARE (fc1)
    Regards
    Mel

Recently I met the same problem.
With an FWSM board installed in a Catalyst 6509, there is no communication via vlan 1 in the maintenance partition.
Moreover, the FWSM works properly in the application partition (cf:4).
    Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9_WAN-M), Version 12.2(33)SXH8, RELEASE SOFTWARE (fc1)
    System Bootstrap, Version 12.2(17r)SX5, RELEASE SOFTWARE (fc1)
    Mod Ports Card Type                              Model             
      1   48  48-port 10/100/1000 RJ45 EtherModule   WS-X6148A-GE-TX   
      4    6  Firewall Module                        WS-SVC-FWM-1      
      5    2  Supervisor Engine 720 (Active)         WS-SUP720-3BXL    
      8    5  Communication Media Module             WS-SVC-CMM        
    Mod MAC addresses                       Hw    Fw           Sw           Status
      1  001b.d41a.8360 to 001b.d41a.838f   1.5   8.4(1)       8.7(0.22)BUB Ok
      4  0003.fead.962e to 0003.fead.9635   3.0   7.2(1)       4.1(14)      Ok
      5  0017.9444.c3ec to 0017.9444.c3ef   5.4   8.5(2)       12.2(33)SXH8 Ok
      8  0017.0ee2.13cc to 0017.0ee2.13d5   2.8   12.4(25c),   12.4(25c),   Ok
    FWSM versions
    FWSM Firewall Version 3.2(20)
    Device Manager Version 5.0(3)F
It was not possible to verify, as the switch is in service.
I guess the reason is likely the following:
the FWSM supports only untagged packets on vlan 1, and by default the Catalyst 6500 does not tag the native vlan 1.
In my case, native vlan tagging was enabled globally:
    #sh vlan dot1q tag native
    dot1q native vlan tagging is enabled globally
    Per Port Native Vlan Tagging State:
    Port    Operational          Native VLAN
               Mode               Tagging State
    Gi1/2   trunk                 enabled
    Gi1/8   trunk                 enabled
    Gi1/13  trunk                 enabled
    Gi1/14  trunk                 enabled
    Gi1/17  trunk                 enabled
    Gi1/18  trunk                 enabled
    Gi1/21  trunk                 enabled
    Gi1/27  trunk                 enabled
    Gi1/30  trunk                 enabled
    Gi1/32  trunk                 enabled
    Gi1/38  trunk                 enabled
    Gi1/42  trunk                 enabled
    Gi1/43  trunk                 enabled
    Gi1/44  trunk                 enabled
    Gi1/46  trunk                 enabled
    Gi5/2   trunk                 enabled
    Po2     trunk                 enabled
    Po308   trunk                 enabled
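(If that diagnosis is right, a minimal sketch of the workaround, assuming you can change the global trunking behaviour: disable native-vlan tagging so vlan 1 frames reach the maintenance image untagged. Check the impact on the other trunks first.)
SWITCH(config)# no vlan dot1q tag native
SWITCH# show vlan dot1q tag native
dot1q native vlan tagging is disabled globally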

  • Maintenance Mode Switch for APEX

Is there any way that I can throw a switch in APEX and have it allow users who are currently logged in to finish their sessions (maybe displaying an alert message at the top of their pages asking them to finish what they are doing quickly and log out), but display a message denying new users the ability to log in and use APEX? What I am after is a way to gracefully clear users off of my app server so I can take it down without giving active users the grand mal das boot. Kind of the "The store is now closing, please take your final purchases to the registers at this time" type of announcement. Lock the front doors, but let people who are already inside finish their business. If this does not exist, it would be a nice feature to have at the internal workspace control level.

    Hi,
A Windows scheduled task:
    http://technet.microsoft.com/en-us/library/cc748993.aspx
    Please read this one:
    Management Pack for the SCOM 2012 Maintenance Mode Scheduler
    http://blog.tyang.org/2014/05/22/management-pack-scom-2012-maintenance-mode-scheduler/
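(For what it's worth, APEX itself has something close to this: an application can be put into RESTRICTED_ACCESS status, so listed users can keep working while everyone else gets an unavailable message; existing sessions are not forcibly killed. A minimal sketch using apex_util, with a hypothetical application ID and user list -- check the APEX_UTIL documentation for your version:)
begin
  apex_util.set_application_status(
    p_application_id       => 100,
    p_application_status   => 'RESTRICTED_ACCESS',
    p_restricted_user_list => 'ADMIN,SCOTT',
    p_unavailable_value    => 'We are closing for maintenance - please finish up and log out.');
end;
/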

  • 4.01 automatic update today 4.29.11, now pages will NOT load at all, just hangs there saying connecting. Tried system restore, re-installing, only way to get it to work is to boot in safe mode. Running W7, 64bit, I5, 2.53 ghz, 4GB, ati radeon 5470

I've been using Firefox 4.0.1. Today (4.29.11) it downloaded an update, and now pages will not load. I tried disabling the add-ons, System Restore, and un-installing/re-installing, but the page just hangs there and says 'connecting'. I booted into safe mode with networking and it worked. Running W7 64-bit, i5 2.53 GHz, 4 GB, ATI Mobility Radeon 5470.


  • My Macbook Pro began crashing when I tried running safari. An error message came up asking that I repair the HD volume. Before I was able to run repair, computer shut down. Now I can not get it to boot. I tried booting in safe mode to no avail. Pls help.

    Can someone please help me with a Macbook Pro booting question?
Recently, my mid-2010 MacBook Pro has started acting up when I am in Safari. Safari would unexpectedly crash multiple times in a row, then run fine for a while, then crash repeatedly again. Not long after this began happening, my Prosoft Drive Genius 3 software threw out a message that a serious error had occurred and I needed to repair the HD volume. Before I was able to run the software to repair the HD volume, the machine stopped working. I turned off the MacBook, and when I tried to restart, it would not boot. I have since removed all external hardware from my machine and tried several times, unsuccessfully, to boot. I have also tried booting in safe mode, which also does not work.
    Any suggestions?  Thanks for your help.

Try an SMC reset:
Plug the MagSafe power adapter into a power source, connecting it to the Mac if it's not already connected.
On the built-in keyboard, press the (left side) Shift-Control-Option keys and the power button at the same time (four keys together).
Release all the keys and the power button at the same time.
Press the power button to turn on the computer.

  • How do I boot into recovery mode with wireless keyboard

    I am unable to boot into recovery mode using either cmd-R or holding down the alt key. How is it supposed to work?

I understand your concerns - I'm glad both of my machines are dual-bootable (just in case) and I have my Snow Leopard install disks..... However, this is what Apple has decided, so we need to find the best way to deal with it. Personally, I've tested the recovery mode several times and, at least on my machine, it was not reliable: once it wouldn't work; I tried again and it did, but it spent more than an hour downloading the entire 4 GB installer and then another 30 minutes installing it; and on a third try I found that it had vanished, because I had cloned my drive. When you clone, only your system is cloned, not the extra recovery partition.
So, I've decided to rely on a) my bootable clones and b) a copy of the installer (.dmg) in case my clones fail. I don't feel comfortable relying on something that requires an internet connection and a full download to work. But that works for me; you may want to take a different approach.
