Q: /etc/dfs/dfstab entries ignored during system start

I am wondering why entries in the dfstab file are ignored during system start.
The situation is as follows:
I have an old x86 system acting as my home server. This machine runs Solaris 10 6/06.
I have the following entry in /etc/dfs/dfstab on this machine:
share -F nfs -o sec=sys,[email protected],[email protected] -d "home dirs" /export/home
in order to automount my home dir from my Blade 2000.
But this doesn't work as expected. After rebooting the server there are no shares available. I have to manually log in to the server and issue a "shareall".
BTW the nfs server is enabled by default:
svcs -a | grep nfs/server
online 15:16:53 svc:/network/nfs/server:default
What is wrong?
Regards,
Andreas

I think you need to do more than just edit the file; the share command will do it all for you.
After editing the file, I think you need to run
shareall
or
svcadm restart svc:/network/nfs/server
to make it see it.
Or you can use the share command directly, like
share /opt
which will make the entry for you and enable it.
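For reference, a minimal sketch of that workflow. The share line mirrors the one in the question, with "client1" as a placeholder host (the real hostnames were redacted above); the entry is staged under /tmp here so the syntax can be checked before touching /etc/dfs/dfstab:

```shell
#!/bin/sh
# Stage the dfstab entry so it can be eyeballed before installing it.
entry='share -F nfs -o sec=sys,rw=client1 -d "home dirs" /export/home'
printf '%s\n' "$entry" > /tmp/dfstab.new
grep -c '^share -F nfs' /tmp/dfstab.new   # expect: 1 entry staged

# Then, as root on the server:
#   cat /tmp/dfstab.new >> /etc/dfs/dfstab
#   shareall        # (re)share every dfstab entry now, no reboot needed
#   share           # verify /export/home is listed
```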

Similar Messages

  • Mouse Freezes up during system start up

So I recently reformatted my hard drive and installed XP instead of Vista.  I reinstalled all my drivers and updated my system, but now I seem to have a sporadic problem where on maybe one out of every three system boots my mouse will freeze, forcing me to reboot.  I tried reinstalling the mouse drivers and the BIOS, but that hasn't helped the problem.  Any other advice?
    Thanks.

    Hello and welcome here!
Please always tell us your model number, not the S/N! That helps a lot.
    TIP: If your computer runs satisfactorily now, it may not be necessary to update the system.

  • Dfstab entries get invalidated on reboot in S10u6

    Hello,
I have installed a SPARC workstation with the latest Solaris release (Solaris 10 update 6: 10/2008) and with ZFS on all filesystems.
    To share directories via NFS, I use the traditional way, i.e. the /etc/dfs/dfstab file.
    In this file, I added entries similar to the following:
    share -F nfs -d "home directories" /export/home
    Enabling the NFS server service with:
    svcadm enable nfs/server
    works fine. The "share" command effectively displays the shared directories.
    However, when rebooting the machine, the NFS server service becomes disabled and the /etc/dfs/dfstab file automatically gets modified with:
    # Error: Syntax: share -F nfs -d "home directories" /export/home
    By the way, this happens for every share and whether the NFS server service was previously enabled or not.
    The modification date of the dfstab file indicates that the modification takes place during Solaris stop, not after the reboot.
    Also, in Solaris 10 10/2008, the dfstab file starts with the following warning:
    # Do not modify this file directly.
    # Use the sharemgr(1m) command for all share management
    # This file is reconstructed and only maintained for backward
    # compatibility. Configuration lines could be lost.
    However, the sharemgr utility is not present in this release.
So how does one enable persistent NFS shares in Solaris 10 10/2008, apart from using "zfs set sharenfs=..."?

    After performing some testing I have found the following: On a clean Solaris install the dfstab file works properly for sharing file systems over NFS. If you set any of the zfs file systems to “sharenfs=on” then the new file sharing system is “activated” and the dfstab file no longer functions in the traditional manner. The following message is prepended at the top of the /etc/dfs/dfstab file:
    # Do not modify this file directly.
    # Use the sharemgr(1m) command for all share management
    # This file is reconstructed and only maintained for backward
    # compatibility. Configuration lines could be lost.
If you see this message at the top of your dfstab file, you know the "new" method of sharing is now in effect and your dfstab will no longer function in the traditional sense. It appears that several OS operations automatically generate and overwrite the dfstab file for backward compatibility. If you had entries in the dfstab file BEFORE your system "switched" to using the new file sharing system, those shares will remain in the dfstab file and will continue to work properly. If you add new shares to the dfstab file AFTER the system has "switched" to the new file sharing system, those entries will be commented out and the following text will be prepended to the entry:
    # Error: Syntax:
    An example would be:
    # Error: Syntax: share -F nfs -o rw=bs1.sun.com /export/somefs
    You can manually remove the comment and error message and save the changes to the dfstab file and then run shareall and it will work in the traditional sense. However, the lines will be commented out again the next time any OS operation is performed that overwrites the dfstab file.
I have found that rebooting the system overwrites the dfstab file. I was also told that creating or deleting a pool or file system also causes the dfstab to be overwritten. I have not verified this, but it seems logical that any operation that changes the configuration of the ZFS pool would cause the dfstab file to be updated.
This document explains the changes to NFS file sharing and why they were made. After reading it, it makes sense that the dfstab file no longer functions in the traditional sense. I only wish I had been familiar with this before installing a new NFS server. We would like to go back to using the old NFS sharing technology, but I cannot find any way to go back after the OS "switched" to the new system. We have our reasons for wanting to use the traditional methods for NFS sharing, although they may not be as efficient as the new method.
    http://developers.sun.com/solaris/articles/nfs_zfs.html
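Based on the behaviour described above, a quick way to check which regime a machine is in is to look for the "Do not modify" header. This sketch demonstrates the check on a sample copy so nothing on the real system is touched; point `dfstab` at /etc/dfs/dfstab to run it for real:

```shell
#!/bin/sh
# If dfstab carries the "Do not modify" header, the ZFS/sharemgr-era code
# owns the file and hand-added entries will be invalidated on reboot.
dfstab=/tmp/dfstab.sample          # use /etc/dfs/dfstab on a live system
cat > "$dfstab" <<'EOF'
# Do not modify this file directly.
# Use the sharemgr(1m) command for all share management
share -F nfs -d "home directories" /export/home
EOF
if grep -q 'Do not modify this file directly' "$dfstab"; then
    echo "new sharing model active: prefer 'zfs set sharenfs=...' here"
else
    echo "traditional dfstab still in effect"
fi
```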

  • PDB doesn't take underscore parameters during database start

    Hello
All underscore parameters that are set at the PDB level are ignored during database start.
An underscore parameter is set at the PDB level in memory and in the spfile. After restarting the database, the parameter is still in the spfile but no longer in memory.
    SQL> show pdbs
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
             2 PDB$SEED                       READ ONLY  NO
             3 Q00A                           READ WRITE NO
    SQL> alter session set container=Q00A;
SQL> alter system set "_push_join_union_view"=FALSE scope=both sid='*';
SQL> show parameter "_push_join_union_view"
    NAME                                 TYPE        VALUE
    _push_join_union_view                boolean     FALSE
SQL> show spparameter "_push_join_union_view"
    SID      NAME                          TYPE        VALUE
    *        _push_join_union_view         boolean     FALSE
    srvctl stop db -d cdbq00r; srvctl start db -d cdbq00r
SQL> show parameter "_push_join_union_view"
    no rows
SQL> show spparameter "_push_join_union_view"
    SID      NAME                          TYPE        VALUE
    *        _push_join_union_view         boolean     FALSE
    Thanks
    Venkat

    In the future please post multitenant questions in the Multitenant forum
    Multitenant
    Based on what you posted we can NOT tell WHAT database/PDB that last command is showing data for or what DBs are being started and opened.
    A STARTUP command only starts/opens the root/CDB by default. It does NOT open ANY PDBs.
    If you want PDBs to be opened at startup you need to create an AFTER STARTUP trigger to open them.
    The code you posted shows a startup but does NOT show if the PDB is open and does NOT show that the current container is the PDB when you check that parameter.
    Rerun your test and post ALL of the info needed that shows:
    1. the PDB is actually being opened,
    2. the current container is set to the PDB
    3. the value of the parameter for that PDB
    That is, AFTER you stop/restart the database use EXACTLY the same commands you used at the start: show the pdbs, change the container to the PDB and then show the parameter value.
    Post the FULL results of doing ALL of that.
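A hedged sketch of the AFTER STARTUP trigger the reply mentions. The trigger name is made up; the SQL is staged to a file here, to be run in the root container as SYSDBA (e.g. `sqlplus / as sysdba @/tmp/open_pdbs.sql`):

```shell
#!/bin/sh
# Stage a trigger that opens all PDBs whenever the CDB starts.
cat > /tmp/open_pdbs.sql <<'EOF'
CREATE OR REPLACE TRIGGER open_pdbs_after_startup
AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/
EOF
grep -c 'AFTER STARTUP' /tmp/open_pdbs.sql   # expect: 1
```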

  • Solaris 8 Boot Problems, random messages [/etc/rcS: /etc/dfs/sharetab:]

    Hello All,
    I have all of a sudden developed issues with booting up one of my Solaris 8 [V240] Servers. Upon a routine reboot, I was faced with the following errors:
    Feb 1 07:56:44 sco1-au-tci scsi: WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd0):
    Feb 1 07:56:44 sco1-au-tci Error for Command: read(10) Error Level: Retryable
    Feb 1 07:56:44 sco1-au-tci scsi: Requested Block: 114007888 Error Block: 114007903
    Feb 1 07:56:44 sco1-au-tci scsi: Vendor: SEAGATE Serial Number: 053532DN34
    Feb 1 07:56:44 sco1-au-tci scsi: Sense Key: Media Error
    Feb 1 07:56:44 sco1-au-tci scsi: ASC: 0x11 (unrecovered read error), ASCQ: 0x0, FRU: 0xf
    Feb 1 07:56:45 sco1-au-tci scsi: WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd0):
    Feb 1 07:56:45 sco1-au-tci Error for Command: read(10) Error Level: Fatal
    Feb 1 07:56:45 sco1-au-tci scsi: Requested Block: 114007888 Error Block: 114007903
    Feb 1 07:56:45 sco1-au-tci scsi: Vendor: SEAGATE Serial Number: 053532DN34
    Feb 1 07:56:45 sco1-au-tci scsi: Sense Key: Media Error
    Feb 1 07:56:45 sco1-au-tci scsi: ASC: 0x11 (unrecovered read error), ASCQ: 0x0, FRU: 0xf
So I figured: oh crap, the disk is messed up. However, running a few scans, e.g. 'iostat -En', showed ALL errors to be '0'. In addition, I ran the format -> analyze -> read test, which ran for about 10 hours and came back saying 0 errors were found to be repaired. So it appears nothing in particular is wrong with my hardware. After the 2nd reboot, I didn't get the errors above anymore, but now I can't seem to get past single-user mode. I get the following errors:
    mount: the state of /dev/dsk/c1t0d0s0 is not okay
    and it was attempted to be mounted read/write
    mount: Please run fsck and try again
    /sbin/rcS: /etc/dfs/sharetab: cannot create
failed to open /etc/coreadm.conf
syseventd: Unable to open daemon lock file '/etc/sysevent/syseventd_lock': 'Read-only file system'
    INIT: Cannot create /var/adm/utmpx
    INIT: failed write of utmpx entry:" "
    INIT: failed write of utmpx entry:" "
    INIT: SINGLE USER MODE
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):
    single-user privilege assigned to /dev/console.
    Entering System Maintenance Mode
I have read a whole bunch of stuff I found on Google, like /var being full (it's not), the WWN being wrong when compared between vfstab, /dev, and the /devices directory, etc. I don't know what is wrong and I don't know what to do to fix it. Any ideas as to why this happened and what I can do?
    PLEASE HELP!!!

    Hi Darren,
Thanks again for the response. OK, so the question I have right now is: how do I fix my original issue, pasted below?
    mount: the state of /dev/dsk/c1t0d0s0 is not okay
    and it was attempted to be mounted read/write
    mount: Please run fsck and try again
    /sbin/rcS: /etc/dfs/sharetab: cannot create
failed to open /etc/coreadm.conf
syseventd: Unable to open daemon lock file '/etc/sysevent/syseventd_lock': 'Read-only file system'
    INIT: Cannot create /var/adm/utmpx
    INIT: failed write of utmpx entry:" "
    INIT: failed write of utmpx entry:" "
INIT: SINGLE USER MODE
Will running fsck fix this too? This is a critical machine and I need to bring it up during work hours in the eastern part of the world. This server has worked totally fine for over 180 days with the invalid filesystem and the inability to run fsck. Any ideas on how to fix the errors above?
    Also
Yes. It needs to be at least the size of the filesystem, which you've reported to be 143349312 sectors. There's no easy way to shrink a UFS filesystem.
Does this mean that I have to re-create the partitions/layout of the disk and reinstall the OS and applications? Is there any way to re-layout without destroying the data?
    Thanks
    \R
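For reference, the usual repair sequence for the "state of /dev/dsk/c1t0d0s0 is not okay" error is staged here as a script rather than executed; the device name is taken from the error output above, and the commands are meant for the single-user maintenance prompt, not for a running system:

```shell
#!/bin/sh
# Stage the maintenance-prompt commands for reference.
cat > /tmp/repair_root.sh <<'EOF'
# At the maintenance prompt: fsck the raw device behind the failing mount.
fsck -y /dev/rdsk/c1t0d0s0   # -y answers yes to every proposed repair
fsck -y /dev/rdsk/c1t0d0s0   # rerun until it reports the filesystem clean
EOF
wc -l < /tmp/repair_root.sh   # expect: 3
```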

  • Update Database Statistics during System Export

    hi All,
I am getting the following error at the step (Update Database Statistics) during system export.
    WARNING 2011-06-17 15:13:35
    Execution of the command "E:\usr\sap\PRD\SYS\exe\uc\NTAMD64\
    brconnect.exe -u / -c -o summary -f stats -o SAPSR3 -t all -m +I -s
    P10 -f allsel,collect,method,precision,space,keep -p 4"
    finished with return code 5.
    Output: BR0801I BRCONNECT 7.00 (31)BR0805I Start of BRCONNECT processing:
    cegczbfe.sta 2011-06-17 15.12.02BR0484I BRCONNECT log file:
    G:\oracle\PRD\sapcheck\cegczbfe.sta
    ERROR 2011-06-17 15:13:35
    CJS-30023  Process call 'E:\usr\sap\PRD\SYS\exe\uc\NTAMD64\
    brconnect.exe -u / -c -o summary -f stats -o SAPSR3 -t all -m +I -s P10
    -f allsel,collect,method,precision,space,keep -p 4'
    exits with error code 5. For details see log file(s) brconnect.log.
Here is a small part of brconnect.log:
    BR0204I Percentage done: 27.88%, estimated end time: 15:16
    BR0001I **************____________________________________
    BR0280I BRCONNECT thread 2: time stamp: 2011-06-17 15.13.31
    BR0301E SQL error -20003 in thread 2 at location stats_tab_collect-20, SQL statement:
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BEV3/CHCL_STAT"', ESTIMATE_PERCENT => NULL, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => NULL, CASCADE => TRUE, NO_INVALIDATE => FALSE); END;'
    ORA-20003: Specified bug number (5099019) does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 14830
    ORA-06512: at "SYS.DBMS_STATS", line 14851
    ORA-06512: at line 1
    BR0886E Checking/collecting statistics failed for table SAPSR3./BEV3/CHCL_STAT
    BR0280I BRCONNECT thread 2: time stamp: 2011-06-17 15.13.31
    BR0301E SQL error -20003 in thread 2 at location stats_tab_collect-20, SQL statement:
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BEV3/CHCL_STATT"', ESTIMATE_PERCENT => NULL, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => NULL, CASCADE => TRUE, NO_INVALIDATE => FALSE); END;'
    ORA-20003: Specified bug number (5099019) does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 14830
    ORA-06512: at "SYS.DBMS_STATS", line 14851
    ORA-06512: at line 1
    BR0886E Checking/collecting statistics failed for table SAPSR3./BEV3/CHCL_STATT
    BR0280I BRCONNECT thread 2: time stamp: 2011-06-17 15.13.31
    BR0301E SQL error -20003 in thread 2 at location stats_tab_collect-20, SQL statement:
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BEV3/CHCMVWOBJ"', ESTIMATE_PERCENT => NULL, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => NULL, CASCADE => TRUE, NO_INVALIDATE => FALSE); END;'
    ORA-20003: Specified bug number (5099019) does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 14830
    ORA-06512: at "SYS.DBMS_STATS", line 14851
    ORA-06512: at line 1
    BR0886E Checking/collecting statistics failed for table SAPSR3./BEV3/CHCMVWOBJ
    BR0280I BRCONNECT thread 2: time stamp: 2011-06-17 15.13.31
    BR0301E SQL error -20003 in thread 2 at location stats_tab_collect-20, SQL statement:
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BEV3/CHCMVWOBPR"', ESTIMATE_PERCENT => NULL, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => NULL, CASCADE => TRUE, NO_INVALIDATE => FALSE); END;'
    ORA-20003: Specified bug number (5099019) does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 14830
    BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BEV3/CHCTLVAR"', ESTIMATE_PERCENT => NULL, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => NULL, CASCADE => TRUE, NO_INVALIDATE => FALSE); END;'
    ORA-20003: Specified bug number (5099019) does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 14830
    ORA-06512: at "SYS.DBMS_STATS", line 14851
    ORA-06512: at line 1
    ORA-06512: at line 1
    BR0886E Checking/collecting statistics failed for table SAPSR3./BEV3/CHCTMSTFUB
    BR0280I BRCONNECT thread 2: time stamp: 2011-06-17 15.13.34
    BR0844E 100 errors occurred in thread 2 - terminating processing of the thread...
    BR0280I BRCONNECT time stamp: 2011-06-17 15.13.34
    BR0848I Thread 2 finished with return code -1
    BR0280I BRCONNECT time stamp: 2011-06-17 15.13.34
    BR0879I Statistics checked for 0 tables
    BR0878I Number of tables selected to collect statistics after check: 0
    BR0880I Statistics collected for 639/0 tables/indexes
    BR0889I Structure validated for 0/749/0 tables/indexes/clusters
    BR1308E Collection of statistics failed for 68292/0 tables/indexes
    BR0806I End of BRCONNECT processing: cegczbfe.sta 2011-06-17 15.13.34
    BR0280I BRCONNECT time stamp: 2011-06-17 15.13.35
    BR0804I BRCONNECT terminated with errors
    how can I resolve this issue?
    Regards,
    majamil

Hi SM,
I have applied these patches with OPatch, and they applied successfully.
Is that OK, or should I have used MOPatch for them?
One more thing, for your information: when I execute Delete Harmful Statistics under Database Statistics in BRTOOLS, I get the following:
    BR0280I BRCONNECT time stamp: 2011-06-30 08.10.53
    BR0818I Number of tables found in DBSTATC for owner SAPSR3: 390
    BR0280I BRCONNECT time stamp: 2011-06-30 08.10.53
    BR0807I Name of database instance: PRD
    BR0808I BRCONNECT action ID: cegfjniv
    BR0809I BRCONNECT function ID: dst
    BR0810I BRCONNECT function: stats
    BR0812I Database objects for processing: HARMFUL
    BR0852I Number of tables to delete statistics: 0
    BR0856I Number of indexes to delete statistics: 0
BR0863W No tables/indexes found to update/delete statistics or validate structure
    BR0806I End of BRCONNECT processing: cegfjniv.dst 2011-06-30 08.10.53
    BR0280I BRCONNECT time stamp: 2011-06-30 08.10.53
    BR0803I BRCONNECT completed successfully with warnings
    BR0292I Execution of BRCONNECT finished with return code 1
    BR0668I Warnings or errors occurred - you can continue to ignore them or go back
    to repeat the last action
    BR0280I BRTOOLS time stamp: 2011-06-30 08.10.53
    BR0670I Enter 'c[ont]' to continue, 'b[ack]' to go back, 's[top]' to abort:
    c
Why am I getting this return code 1?
    Regards,
    Edited by: majamil on Jun 30, 2011 8:20 AM

  • Create entry for remote system necessary?

    Hello,
is it necessary to start, in CEN, transaction RZ21 → Technical Infrastructure → Configure Central System → Create entry for remote system?
    What is the result of this transaction and why is a <sid>adm user needed?
    Thanks

    Hello,
I take it you mean you have a JMS queue created in an Oracle database (A) and you want to propagate messages to a JMS queue created in an Oracle database (B)?
If that is the case, you use normal AQ propagation. You can follow <Note:102771.1> as an example, changing the ADT as appropriate, etc.
    MGW is only to be used for Oracle to 3rd-party propagation.
    Thanks
    Peter

  • Clearing of GR/IR Account for initial stock entry into the system

    Hi All,
We, the MM team, did some initial stock entries into the system using movement type 561 in the MIGO transaction. But in FI, when the GR/IR clearing account was checked, the entries caused by the initial stock entry were not cleared. When we tried to clear them, we got a message saying they cannot be manually cleared. Does any of you know what should be done in this case?

I have one more question: in my company there is some amount that is not balancing. We feel that is due to the initial stock entry of the materials. Is there a way to view the stock value on a particular date by particular movement type? We have a transaction in MM, but it does not give the currency; in the currency column it says $0.00.

  • How to solve this "Prefix number: entry missing for system EC5 client 800"

    Hi,
I am a workflow learner with theoretical knowledge of it.
Now I have started to practice in my IDES server.
I am getting an error message while trying to save my standard task with my object type, method, and event (which were already created in SWO1).
My error message details:
    Prefix number: entry missing for system EC5 client 800
    Message no. 5W023
    Diagnosis
    Tasks, rules, and workflow definitions require an ID that is unique throughout all systems and clients. In this way, you can ensure that you can transport these objects from one system into another at any time (without restrictions). From a technical point of view, the uniqueness of the ID is ensured by what is known as a "prefix number". You can define a separate prefix number for every system and every client in table T78NR.
    System response
    If a prefix number is not defined in the client in which you are currently working, it is not possible to maintain (maintenance terminates).
I even tried to create an entry in table T78NR, but as that is a standard table, I am unable to make an entry.
I assume we have to configure the workflow; I do not have any Basis consultant with me. Can anyone help me get past this problem?
    Thanks and Regards,
    Surender Batlanki.
    Edited by: Surender Batlanki on Oct 15, 2008 2:20 PM

Hi Surender and other SAP Workflow gurus,
Surender, you said that you got the solution.
Honestly speaking, I am new to SAP Workflow. Can you please guide me to the instructions and documents, if any? I am practising on a test server machine. I would be really happy if you all could help me get out of this.
In prefix number maintenance (OOW4), the system asks me to enter the prefix number, interval start, interval end, and check sum.
I entered the same 980 as the prefix number, but I don't know what to enter for the interval start, interval end, and check sum.
Also, a message comes up when I try to maintain some values: "No intervals can be reserved for this prefix number" - Msg No: 5W179.
Please, gurus, help me.
Regards,
Guru

  • Check HDD Part Number in Solaris 10 during System Running

    Dear all,
    Please help me,
I want to check the part number of the HDDs in a Sun Fire V890 server while the system is running, using a Solaris OS command. Can we check the HDD part number with a Solaris command?
Or is there another way, apart from shutting down the system and unplugging the HDD from the server?
    Thanks you and Regards,
    Soret,

    The following command will list the vendor and product ID for each disk:
iostat -E
From [Sun Fire[tm] V890 Server, RoHS:YL - Full Components List|http://sunsolve.sun.com/handbook_private/validateUser.do?target=Systems/SunFireV890_R/components#Disks] you should be able to find the matching Manufacturing Part for a given vendor and product ID.

  • No Logical System Entry in Business System

When I send an IDoc from XI to a target R/3 system, I get the error "Unable to convert the sender service to an ALE logical system".
I checked the threads, and the problem seems to be that no logical system is recognised by the business system.
I checked the logical system in the technical system in the SLD, and it is there.
I checked SALE, and the entry is in the systems.
But when I go to the Integration Directory and check System --> Adapter-Specific Information, I see an empty entry for "Logical System". The field stays un-editable, and when I hit the "Sync with SLD" button, nothing happens.
Any idea how to either enter the logical system or make it recognise the SLD entry?
    Thanks for any hints, points will surely be awarded

    Hi,
    Have you looked int XI FAQ by Michal?
    /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions
    Have a look into the weblog
    /people/michal.krawczyk2/blog/2005/03/29/xi-error--unable-to-convert-the-sender-service-to-an-ale-logical-system
    which addresses your problem.
    Regards,
    Jai Shankar

  • Error at step Run ABAP Reports during system copy in HPUX

    Hi experts,
During a system copy from Production to Development I am facing the following error at the step
Run ABAP Reports
    INFO 2009-03-18 03:57:36
    Information for application function INST_EXECUTE_REPORT copied to local Repository.
    INFO 2009-03-18 03:57:36
    Function module INST_EXECUTE_REPORT set successfully.
    INFO 2009-03-18 03:57:36
    Executing function call INST_EXECUTE_REPORT.
    ERROR 2009-03-18 03:57:38
    FRF-00025  Unable to call function. Error message: Exception condition "WRITE_FAILED" raised. .
    INFO 2009-03-18 03:57:38
    RFC connection closed.
    ERROR 2009-03-18 03:57:38
    MUT-03025  Caught ERfcExcept in Modulecall: Exception condition "WRITE_FAILED" raised..
    ERROR 2009-03-18 03:57:38
    FCO-00011  The step runRADDBDIF with step key |NW_Doublestack_OneHost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CI_Instance|ind|ind|ind|ind|11|0|NW_CI_Instance_ABAP_Reports|ind|ind|ind|ind|2|0|runRADDBDIF was executed with status ERROR .
    Please let me know the solution
    Thanks & Regards,
    Arun

    This seems to be related with permission to write in /usr/sap/trans
    Read,
    Re: Run ABAP Reports error during ECC 6.0 installation on Win2003 with Oracle
    Exception condition "WRITE_FAILED" raised during installation 4.0
    and
    Install error at Phase 33
    Regards
    Juan
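A small probe in the spirit of those threads: confirm that the installing user can actually create files under the transport directory. TRANSDIR defaults to a scratch path here so the sketch is harmless to run; on the failing host it would be /usr/sap/trans, and the `<sid>adm` ownership hint is an assumption to check against your setup:

```shell
#!/bin/sh
# Writability probe for the transport directory.
TRANSDIR=${TRANSDIR:-/tmp/trans.demo}
mkdir -p "$TRANSDIR"
if touch "$TRANSDIR/.write_test" 2>/dev/null; then
    rm -f "$TRANSDIR/.write_test"
    echo "writable: $TRANSDIR"
else
    echo "NOT writable: $TRANSDIR (check ownership/permissions for <sid>adm)"
fi
```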

  • Error during system copy

    Hello Experts,
I am doing a heterogeneous copy. During the Start Instance phase of the copy, the instance fails to start. The services are started, the DB is up too, everything looks fine, but the copy still does not continue. Here are my findings.
    Below is the response of niping -v
    Hostname/Nodeaddr verification:
    ===============================
    Hostname of local computer: newsaptest                       (NiMyHostName)
    Lookup of hostname: newsaptest                               (NiHostToAddr)
        --> IP-Addr.: 10.1.1.207
    Lookup of IP-Addr.: 10.1.1.207                               (NiAddrToHost)
        --> Hostname: newsaptest
    Lookup of hostname: localhost                                (NiHostToAddr)
        --> IP-Addr.: 127.0.0.1
    Lookup of IP-Addr.: 127.0.0.1                                (NiAddrToHost)
        --> Hostname: localhost
    External ip: 127.0.0.1
    Internal Ip: 10.1.1.207.
    Below is the error msg from /work folder
    CPICTRC7378 file says: ERROR => NiPConnect2: SiPeekPendConn failed for hdl 3 / sock 13
        (SI_ECONN_REFUSE/111; I4; ST; 10.1.1.207:3306) [nixxi.cpp    2770]
    ERROR => GwIConnect: GwConnect to newsaptest / 3306 failed (rc=NIECONN_REFUSED) [gwxx_mt.c    296]
    dev_w says:
    ES initialized.
    B  db_con_shm_ini:  WP_ID = 9, WP_CNT = 17, CON_ID = -1
    I  *** ERROR => shmat(39452703,0x(nil),SHM_RND) (12: Cannot allocate memory) [shmux.c      1597]
    B  dbtbxbuf: Shm Segment 19: Cannot attach
    B  ***LOG BBB=> ADM message TBX buffer: function shmcreate0 returns RC = 256        [dbtbxbuf#2 @ 16094] [dbtbxbuf1609 4]
    B  ***LOG BZL=> internal error in table buffer: table buf  init fail   [dbtbxbuf#2 @ 1701] [dbtbxbuf1701 ]
    B  dbtbxbuf: return code (sap_rc): 2,      Buffer TBX_GENERIC will not be available
    B  db_tblinit failed
    M  *** ERROR => ThCallHooks: event handler db_init for event CREATE_SHM failed [thxxtool3.c  261]
    M  *** ERROR => ThIPCInit: hook failed [thxxhead.c   2084]
    M  ***LOG R19=> ThInit, ThIPCInit ( TSKH-IPC-000001) [thxxhead.c   1523]
    M  in_ThErrHandle: 1
    M  *** ERROR => ThInit: ThIPCInit (step 1, th_errno 17, action 3, level 1) [thxxhead.c   10468]
    Can someone please help here.
    Thanks!
    Rahul.

    >                            I am doing a heterogeneous copy
Just for completion: you need to have a certified migration consultant on-site to do a heterogeneous migration, otherwise you'll lose SAP support for the target system (see http://service.sap.com/osdbmigration --> FAQ).
    > dev_w says:
    >
    > ES initialized.
    > B  db_con_shm_ini:  WP_ID = 9, WP_CNT = 17, CON_ID = -1
    > I  *** ERROR => shmat(39452703,0x(nil),SHM_RND) (12: Cannot allocate memory) [shmux.c      1597]
    > B  dbtbxbuf: Shm Segment 19: Cannot attach
    > B  ***LOG BBB=> ADM message TBX buffer: function shmcreate0 returns RC = 256        [dbtbxbuf#2 @ 16094] [dbtbxbuf1609 4]
    > B  ***LOG BZL=> internal error in table buffer: table buf  init fail   [dbtbxbuf#2 @ 1701] [dbtbxbuf1701 ]
    > B  dbtbxbuf: return code (sap_rc): 2,      Buffer TBX_GENERIC will not be available
    > B  db_tblinit failed
    This seems to be an OS specific IPC shared memory configuration issue.
    What OS do you use?
    Markus
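Generic first checks for a failing shmat(), along the lines Markus suggests: look at the existing System V shared-memory segments and the kernel limits. The `ipcs` call is generic; the Solaris 10 project resource controls are shown as comments, and "prdadm" is a placeholder user:

```shell
#!/bin/sh
# Capture current System V shared-memory usage to a log for inspection.
log=/tmp/shm_check.log
{
  echo "== shared memory segments =="
  command -v ipcs >/dev/null 2>&1 && ipcs -m || echo "(ipcs not available)"
} > "$log" 2>&1
head -5 "$log"
# Solaris 10 (as root): check and raise the per-project shm limit, e.g.
#   prctl -n project.max-shm-memory -i project user.prdadm
#   projmod -s -K "project.max-shm-memory=(privileged,8G,deny)" user.prdadm
```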

  • Error: Prefix number: entry missing for system REP client 113  Msg no 5W023

    Hi, we're trying to study Workflow based on the manuals we downloaded online.  Upon saving, we encountered the error: Error: Prefix number: entry missing for system REP client 113  Msg no 5W023.
    Here's the diagnosis:
    Diagnosis
    Tasks, rules, and workflow definitions require an ID that is unique throughout all systems and clients. In this way, you can ensure that you can transport these objects from one system into another at any time (without restrictions). From a technical point of view, the uniqueness of the ID is ensured by what is known as a "prefix number". You can define a separate prefix number for every system and every client in table T78NR.
    System Response
    If a prefix number is not defined in the client in which you are currently working, it is not possible to maintain (maintenance terminates).
    Procedure
    Maintain table T78NR in Customizing, and create an entry for the system in question and the current client.
    Hope somebody can help me understand what this is.. thanks so much and appreciate your help..
    Thanks,
    Angela Paula

    Hi Angela,
    Please check if the Prefix number is maintained in SWU3 t-code.
    Go to Maintain Definition Environment -> Maintain Prefix Numbers and check if the prefix number is maintained.
    Hope this helps!
    Regards,
    Saumya

  • Attach CSV volumes from filter driver during system startup

    Hi,
We have written a filter driver to track Hyper-V CSV volumes, in order to track modifications in those volumes for backup purposes. When the Hyper-V host is running, we are able to attach the CSV volumes from the driver without any issues. But during Hyper-V host startup, our driver fails to attach the CSV volumes.
We suspect the filter driver fails to attach the CSV volumes because the Cluster service has not yet started at that point during system startup. If we attach the CSV volume later, it works. However, for continuous tracking we want our driver to track modifications from system startup itself. I believe we need to load the filter driver once the CSV service has started.
The filter driver configuration is as follows (INF file):
    DisplayName      = %ServiceName%
    Description      = %ServiceDescription%
    ServiceBinary    = %12%\%DriverName%.sys        ;%windir%\system32\drivers\
    Dependencies     = FltMgr
    ServiceType      = 2                            ;SERVICE_FILE_SYSTEM_DRIVER
    StartType        = 2                            ;SERVICE_SYSTEM_START
    ErrorControl     = 1                            ;SERVICE_ERROR_NORMAL
    LoadOrderGroup   = "FSFilter Activity Monitor"
    AddReg           = Minispy.AddRegistry
    ;Instances specific information.
    DefaultInstance         = "Minispy - Top Instance"
    Instance1.Name          = "Minispy - Middle Instance"
    Instance1.Altitude      = "370000"
    Instance1.Flags         = 0x1          ; Suppress automatic attachments
    Instance2.Name          = "Minispy - Bottom Instance"
    Instance2.Altitude      = "361000"
    Instance2.Flags         = 0x1          ; Suppress automatic attachments
    Instance3.Name          = "Minispy - Top Instance"
    Instance3.Altitude      = "385100"
    Instance3.Flags         = 0x1          ; Suppress automatic attachment
    Can you please let us know how to fix this issue? Do we need to change any configuration in the INF file?
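
    One possible approach is to make the driver an auto-start service with an explicit dependency on the Cluster service, so the Service Control Manager only starts it after the cluster is up and the CSV volumes exist. This is a sketch, not a verified fix; "ClusSvc" is assumed to be the short service name of the Cluster service on your hosts, and "Minispy" stands in for your filter's name:

    ```inf
    Dependencies     = "FltMgr,ClusSvc"             ;start only after Filter Manager and the Cluster service
    StartType        = 2                            ;SERVICE_AUTO_START, honored by the SCM dependency ordering
    ```

    Alternatively, since the instances already set Flags = 0x1 to suppress automatic attachment, the volumes could be attached manually once the cluster is up, for example with `fltmc attach minispy C:\ClusterStorage\Volume1` run from a startup task that waits for ClusSvc.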

