Granting Limitations OS 10.3.9

On my Mac I have the Administrator account and a second account. When I set the limitations on the second account, they keep changing after I log out of both accounts. Example:
I set it to allow applications, changed to another area in Preferences, and returned only to find that the applications setting had been unchecked. I finally got it to hold, but I am sure that when I sign out of the computer and sign back in I will have to reset it.

The problem of user limitation settings being lost is frequently reported among 10.3 users. There seems to be more than one issue, but since you reported that you were successful in getting the selection to "hold" at one point, I suspect yours might be the problem where the "Limitations" tab of the "Accounts" pref pane is unable to read the existing settings and consequently writes out blank settings when the tab is closed. With this bug, it is the act of checking the settings that causes them to be lost. I am not aware of the root cause, however.
In terms of workarounds, many users find the limitations work fine as long as they can resist the temptation to open the "Limitations" tab for that user. If it is not practical to test the settings directly, it is possible to view the existing settings without going through "System Preferences", e.g. from the command line. Try opening "/Applications" > "Utilities" > "Terminal.app" and entering the command below, substituting the "managed" user's short name in the appropriate place:
<pre>nicl . -read /users/username mcx_settings</pre>
You could re-run the command periodically to make sure they haven't changed...

Similar Messages

  • Limited privileges for ReSA users

    Hi Experts,
    Can someone help me create users in Oracle Retail Sales Audit? I need to grant limited privileges to RMS users so that they can only access Sales Audit. Also, what script should I use
    to grant limited privileges to roles like Manager and Accounting Clerk?
    Thanks,
    Jeremy

    You may be able to do things with a script.
    Typical "Changing the EUL tables is a risky thing and could cause all sorts of problems..." disclaimers apply.
    I'm not sure how things work with responsibilities, but here's how they work for users.
    The query governor restrictions are stored in the EUL5EUL_USERS table. The "Warn user if predicted time exceeds..." value is stored in the EU_QUERY_EST_LMT column. The "Prevent queries from running longer than..." value is stored in the EU_QUERY_TIME_LMT column. The "Limit retrieved data to..." value is stored in the EU_ROW_FETCH_LIMIT column.
    You should be able to update these values with a simple update statement. Setting the values to 0 essentially acts as if there is no limit.
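    If it helps, here is a minimal sketch of such an update, assuming the table and column names described above; the EU_USERNAME column and the user name are assumptions you should verify against your own EUL before running anything:
    -- Hedged sketch only: EU_USERNAME and 'JEREMY' are placeholders to verify
    UPDATE eul5eul_users
       SET eu_query_time_lmt  = 0,     -- "Prevent queries from running longer than..." (0 = no limit)
           eu_row_fetch_limit = 0      -- "Limit retrieved data to..." (0 = no limit)
     WHERE eu_username = 'JEREMY';
    COMMIT;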

  • Limited Access not working as intended

    In SharePoint 2010, when Limited Access (fine-grained permissions) is enabled, you could assign permissions for a user on a list item, list, etc. SharePoint would then grant Limited Access permissions for that same user on the parent scopes. This allowed the user
    to have access to shared resources such as navigation and master pages, allowing him to traverse to the target item or list without having access to any other content.
    I can't seem to replicate this in SharePoint 2013. I have the following features:
    SharePoint Server Publishing Infrastructure: Enabled (site collection feature)
    Limited-access user permission lockdown mode: Disabled (site collection feature)
    SharePoint Server Publishing: Enabled on all sites (Site feature)
    If I assign permission to a user for a list, the user can view that list via URL but cannot view the parent webs/sites, and cannot view the navigation (site scope).
    If anyone has any idea as to why Limited Access isn't working as I hoped it would, I would be very appreciative of any help!
    Regards,
    Damien

    Since Publishing is enabled, the pages are stored in the Pages library and CSS etc. is stored in the Style library.  Make sure that users have access to those locations.  If they don't, having Limited Access won't allow viewing the default
    page and navigation.  I suspect that in 2010 you were using sites where the default page was stored directly in the site root folder rather than in a library.  Changes in 2013 moved away from that design, so Limited Access normally won't allow display
    of the root site unless you also give users access to the page and any additional required material.
    Paul Stork SharePoint Server MVP
    Principal Architect: Blue Chip Consulting Group
    Blog: http://dontpapanic.com/blog
    Twitter: Follow @pstork
    Please remember to mark your question as "answered" if this solves your problem.

  • Help: Firefox does not work in XP Pro limited account

    Firefox 3 only works in administrator accounts. I tried:
    - Granting full control of the Firefox folder to the limited user so it could run Firefox
    - Changing the limited user ABC to an administrator account, installing Firefox 3, then changing ABC back to a limited account. Firefox still does not work.
    I have a similar problem with Opera.
    Other programs such as IE, MS Office, Adobe, ... work fine. Diskeeper and PC-Doctor require admin privileges.
    My machine is an X61. I have Kaspersky IS 7.0 installed.

    Hi,
    Thanks for your reply.
    Yes, I know Firefox can work without admin privileges. I just have no clue why it does not work in my case.
    I installed Firefox from the administrator account, following the on-screen instructions. I did not modify anything. It installed into the Mozilla Firefox folder under the Program Files folder.
    By "does not work" I mean that when in the limited account (not the admin account) I double-clicked the Firefox icon on the desktop, nothing happened. Same situation with Opera.
    I found a way to run Firefox: I right-click the Firefox icon and choose "Run as...". In the dialog box I choose "administrator" and key in the password. It then works normally. This also works with Opera.
    I know the ThinkPad has its own security system. I just cannot figure out whether this system could interfere with XP Pro operations or not. I think this should not happen.
    Hi admin,
    I'm sorry, I accidentally posted the above reply as a new thread. Please help delete it. Thank you.

  • User cannot access table

    Hi, I created a new user:
    CREATE USER "sue" PROFILE "DEFAULT" IDENTIFIED BY "dbsuepwd" DEFAULT TABLESPACE "ERDBPERM"
    TEMPORARY TABLESPACE "ERDBTEMP" ACCOUNT UNLOCK;
    and granted limited access privileges so the user can insert rows into one table but not delete any data.
    GRANT CREATE SESSION TO sue;
    GRANT INSERT ON EXPENSEREPORT TO "sue";
    But now when I connect to the database using SQL Developer with the sue account, I am not able to insert data into the EXPENSEREPORT table.
    Error report:
    SQL Error: ORA-00942: table or view does not exist
    00942. 00000 - "table or view does not exist"
    *Cause:
    *Action:
    Please help; both the user and the table exist in the same tablespace.
    Please tell me which privileges I should give to the user.

    not working
    tablespace - ERDBPERM
    schema - ERDB
    table created by ERDB user
    tried
    INSERT INTO ERDB.EXPENSEREPORT (erno, erdesc, ersubmitdate, erstatusdate, erstatus, submituserno,
    appruserno) VALUES (EXPENSEREPORT_SEQ.NEXTVAL, 'Sales Presentation', TO_DATE('2007-08-10',
    'yyyy-mm-dd'), TO_DATE('2007-08-26', 'yyyy-mm-dd'), 'APPROVED', 2003, 2004);
    error
    Error starting at line 1 in command:
    INSERT INTO ERDB.EXPENSEREPORT (erno, erdesc, ersubmitdate, erstatusdate, erstatus, submituserno, appruserno)
    VALUES (EXPENSEREPORT_SEQ.NEXTVAL, 'Sales Presentation', TO_DATE('2007-08-10', 'yyyy-mm-dd'), TO_DATE('2007-08-26', 'yyyy-mm-dd'), 'APPROVED', 2003, 2004)
    Error at Command Line:2 Column:8
    Error report:
    SQL Error: ORA-02289: sequence does not exist
    02289. 00000 - "sequence does not exist"
    *Cause:    The specified sequence does not exist, or the user does
    not have the required privilege to perform this operation.
    *Action:   Make sure the sequence name is correct, and that you have
    the right to perform the desired operation on this sequence.
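    A hedged reading of the two errors in this thread: ORA-00942 shows up when sue references the table without the ERDB. prefix (and has no synonym for it), and ORA-02289 shows up because sue has no privilege on, or qualified reference to, the ERDB sequence. A minimal sketch of what would typically be needed (object names are taken from the posts above; run the grant as ERDB or a DBA):
    GRANT SELECT ON ERDB.EXPENSEREPORT_SEQ TO sue;   -- lets sue call NEXTVAL on the sequence
    -- sue must then qualify both objects (or create synonyms for them):
    INSERT INTO ERDB.EXPENSEREPORT (erno, erdesc, ersubmitdate, erstatusdate, erstatus, submituserno, appruserno)
    VALUES (ERDB.EXPENSEREPORT_SEQ.NEXTVAL, 'Sales Presentation', TO_DATE('2007-08-10', 'yyyy-mm-dd'),
            TO_DATE('2007-08-26', 'yyyy-mm-dd'), 'APPROVED', 2003, 2004);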

  • All triggers went invalid

    Hi All,
    This morning when I was looking into the database I saw that all the triggers in the schema had become invalid. I checked the alert log but found no errors, and I was unable to find
    any suspect activity or query.
    What could be the reason?
    SQL trace is ON; can anybody offer any suggestions, reasons, or queries to run with TKPROF?
    Thanks,
    G.

    Thanks Laurent,
    This is what I did previously:
    If you revoke privileges from a schema object, dependent objects are cascade-invalidated. The schema had the DBA privilege; I revoked it and then granted limited privileges.
    But when I grant the privileges back, don't the objects become valid again automatically?
    This is what I have done:
    Revoke dba from schema1;
    grant all privileges to schema1;
    revoke insert any table from schema1;
    Thanks,
    G.
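    A hedged note on the follow-up question: invalid objects are normally revalidated automatically the next time they are referenced, but you can also recompile them explicitly. A minimal sketch (the trigger name is a placeholder):
    -- List what is invalid in the schema
    SELECT object_name, object_type, status
      FROM dba_objects
     WHERE owner = 'SCHEMA1' AND status = 'INVALID';
    -- Recompile a single trigger, or the whole schema
    ALTER TRIGGER schema1.some_trigger COMPILE;
    EXEC DBMS_UTILITY.compile_schema('SCHEMA1');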

  • Network User with Local Admin Privileges?

    I have a small network (around 25 clients total) that was setup prior to my arrival. Each client has its own unique local admin (each machine was setup by the individual user) and it's become somewhat daunting to support them.
    All of the machines are connected (but not specifically bound) to an Open Directory and each is accessible via Remote Desktop, however I cannot push software updates, etc. without local admin privileges.
    I'd rather not create an account on each machine, nor do I want to completely lock down each computer (I'd like them to still have the flexibility to be admins so they can install apps, etc.)
    Is it possible to authenticate against OD and obtain local admin privileges?

    Yes.
    You can wipe all account information and then recreate a common initial admin account. This will make administration far easier as all machines will have the same admin username/password combination. Next, bind all of the systems to the domain and create domain accounts for all users on the server (likely already exist). Log in as the domain accounts and migrate permissions to domain ids. Finally, promote the user to the local admin group through System Preferences > Accounts on the workstation. You must enable the account as a mobile account in Workgroup Manager first. If you do not, the account will not cache to the workstation and you will be unable to add it to the admin group.
    Also, in a workgroup of 25, I would recommend rethinking the decision to grant local admin access to end users. This is asking for trouble, as you will have no control over when updates are applied or even if they are. In theory (and probably in practice), you will have 25 completely different machine configurations. This is far harder to manage and troubleshoot than 25 systems with different admin accounts.
    If you must provide some level of autonomy, while not trivial, you might want to consider modifying /etc/authorization and granting limited admin rights to the users.
    Hope this helps - congrats on the opportunity

  • Grant execute via role has odd limitations

    Hi,
    I have execute grants (with admin option) on a pl/sql package via a role. When I try to compile a pl/sql procedure that uses functions/procedures from that package, I get error
    PLS-00201: identifier '<packagename>' must be declared
    So the only workaround I see is to grant the execute rights (with grant option) straight to my userid, instead of via a role. That seems very odd.
    Can someone explain the reasoning behind this?
    Regards, Paul.

    - Definer's rights stored procedures (the default) cannot see privileges granted through a role. Invoker's rights stored procedures, however, can access privileges granted to roles (a minimal sketch follows at the end of this reply). I don't expect this to help you much in general, since using all invoker's rights stored procedures probably increases your grant management issues, but it may be an option for some.
    - Among the problems with allowing a definer's rights stored procedure to access privileges granted through a role is that roles can be enabled and disabled at runtime for a particular session and can be password protected. If your session enables a password protected role that has access to procedure A in one session, and you create a procedure B that calls A, Oracle would have no way of knowing whether some other session created with the same user account some time later still remembered the password. You'd have similar problems with non-default roles (or roles that were made non-default) for the owner of a procedure.
    Since there are relatively few schemas in a system that will own procedures and relatively many user accounts that need to execute those procedures, it also makes sense to require DBAs to grant privileges to application user accounts directly, since that has the side effect of preventing normal users who can execute a stored procedure from inadvertently creating (and relying on) stored procedures or views. If you've ever come across a system where a view that some critical report relies on was accidentally created in some random user's schema rather than in a shared schema (ideally discovered the day after that user resigns and the security folks drop his schema), you'd appreciate the rule that normal user grants should be via roles while application user grants should be direct.
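    A minimal sketch of the invoker's-rights option mentioned above (all object names are placeholders; static references are still resolved at compile time, so the role-granted package is called through dynamic SQL here):
    CREATE OR REPLACE PROCEDURE run_pkg AUTHID CURRENT_USER AS
    BEGIN
       -- the caller's privileges, including roles, are checked at run time
       EXECUTE IMMEDIATE 'BEGIN some_schema.some_pkg.do_work; END;';
    END run_pkg;
    /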
    Justin

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

    Product : ORACLE SERVER
    Date written : 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also
    looks at some other file related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Many CPU's and system call interfaces (API's) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard API's for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32bit word uses the top most bit as a sign
    indicator leaving only 31 bits to represent the actual value (positive or
    negative). In hexadecimal the largest positive number that can be
    represented in 31 bits is 0x7FFFFFFF, which is +2147483647 decimal.
    This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new API's, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
    Advantages of files larger than 2Gb:
    On most platforms Oracle7 supports up to 1022 datafiles.
    With files < 2Gb this limits the database size to less than 2044Gb.
    This is not an issue with Oracle8 which supports many more files.
    In reality the maximum database size would be less than 2044Gb due
    to maintaining separate data in separate tablespaces. Some of these
    may be much less than 2Gb in size.
    Fewer files to manage for smaller databases.
    Fewer file handle resources required.
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
    As handling of files above 2Gb may need patches, special configuration
    etc., there is an increased risk involved as opposed to smaller files.
    Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE.
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
    otherwise. Note that due to <Bug:568232> it is possible to specify
    a value of MAXSIZE larger than Oracle can cope with, which may
    result in internal errors after the resize occurs. (Errors
    typically include ORA-600 [3292])
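    For example, a hedged sketch (the file name and sizes are placeholders):
    -- Cap autoextension below 2Gb when 'large files' are not in use
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf'
      AUTOEXTEND ON NEXT 100M MAXSIZE 1900M;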
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
    This issue can be worked around by creating the tablespace prior to
    importing by specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
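    For example, a hedged sketch (the tablespace and file names are placeholders):
    -- Pre-create the tablespace with its size in 'M' before running IMPORT with IGNORE=Y
    CREATE TABLESPACE app_data
    DATAFILE '/u03/oradata/PROD/app_data01.dbf' SIZE 2500M;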
    Export to Tape
    ~~~~~~~~~~~~~~
    The VOLSIZE parameter for export is limited to values less than 4Gb.
    On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
    The examples in <Note:30528.1> can be modified for use with SQL*Loader
    for large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
    - The UTL_FILE package uses the 'core' functions mentioned above and so is
    limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
    import java.io.File;
    public class fileCheckUtl {
        // Returns 1 if the named file exists, 0 otherwise
        public static int fileExists(String FileName) {
            File x = new File(FileName);
            if (x.exists())
                return 1;
            else
                return 0;
        }
        public static void main(String args[]) {
            fileCheckUtl f = new fileCheckUtl();
            int i;
            i = f.fileExists(args[0]);
            System.out.println(i);
        }
    }
    Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
    Step 4 Test it:
    SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
    2 /
    FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
    1

  • Limitation on a material basis or another field in a recipe

    Hi All,
    I want to grant access to recipes to users depending on the data that is given in the recipe (e.g. per application field or material, …). E.g. some users are allowed to see/change the recipe of material XYZ, while other users are not allowed to see/change a recipe of material XYZ.
    So, can you please find out if there is any possibility to create a check, or something similar, to implement such an access limitation on a material basis (or another field in the recipe) for recipes?
    Regards
    satish

    It is not common for authorizations to be based on the material codes of the recipe header;
    it is usual, however, to create roles based on the authorization field VAGRP from object C_ROUT_MAT.
    Many VAGRP values (or Planner Groups in the recipe header) can be created in Customizing and assigned to the recipe header.
    Regards

  • Limited Admin Privileges/Specific Elevation of User Accounts

    I'm hoping to create an account on my laptop for my roommate.  I don't want him to have a full admin account, but he knows enough about computers that he could troubleshoot networking, and I want to enable him to install programs on the system.  I'm not sure of the best way to go about creating an account which can elevate itself for specific tasks; I've never modified my sudoers file before, and I don't know how to do so to grant him access to the privileges he should have.  I don't want to force him to use Terminal; I'd rather have him be able to enter a username/password for admin privileges when prompted, whether that's his standard user account or a limited admin account, but I want to make sure that account DOESN'T have access to modify anything in Users & Groups, can't create accounts with dscl, can't modify the keychain or hard drive partitions, etc.
    Am I right in thinking the sudoers file is the best way to approach this?  How do I find out what processes to allow access to?  Does Network Preferences, for example, have any dependencies he will also need to be able to run?  Also, is there a good starting point/article on modifying the sudoers file for this type of thing anywhere?  <<clearly googling the wrong thing because my searches just tell me how to add someone to the sudoers file>>

    To modify network settings he needs to be able to unlock the preference pane, and if you can unlock one pane you can unlock them all, including Users & Groups.
    While it is more feasible to allow him some latitude in the application-installing scenario, it's going to be a pain. The non-server version of OS X is just not set up for this. Either a user has admin privileges or he doesn't; there is no partway.
    Again, if you trust him then you should also trust him not to do what you don't want him to do. If you tell him he can do x but please don't do y, and you think he won't abide by your rules, then giving him any access is potential trouble.
    And again, if he can get to the machine when you are not around, he can do what he likes, privileges or no privileges.
    good luck,
    regards

  • Hospital management and Grant management addon for SAP B1 2005A / 2007 A

    Dear friends,
    I would like to know if any company has made an add-on for Hospital Management and Grant Management for SAP B1 2005 A or SAP B1 2007 A, in English.
    If anybody knows, please provide the info: company website, contact info, and some details on the add-on features.
    Regards,
    Pankaj Gandhi.

    Hi there,
    Try these guys - I found them on the SSP catalog
    Quintegra Solutions Limited
    they have a certified add-on entitled  "Hospital Management & Information System"
    or
    PT MITRAIS
    http://www.mitrais.com/medical/medicalHospital.asp
    regards,
    Stella (Partner Service Advisor)

  • Granting users Site admin to All site collections and/or Adding an o365 group by email to site admin group on all Site collections

    We will have 1000s of site collections.
    Why doesn't SharePoint Online 2013 offer a way to grant a user or a group Site admin rights to all site collections?
    And... if we must add the user to every single site, can this be done with an o365 or ADFS group using its email?
    We'd like to run this script to add a group as site collection admin on all sites, but groups can't be referenced by an email:
    Get-SPOSite|foreach{Set-SPOUser -Site $_.Url -LoginName [email protected] -IsSiteCollectionAdmin $True}
    produces an error. And if we try to add the group by email manually through the UI, it can't find it either. We've tried this with o365 groups and ADFS groups.
    Any way to reference these groups from PowerShell?
    Is this limitation there for a reason? 

    bump.. anybody?

  • Limited Access in parent objects not working

    In my scenario I have a document library with several document sets. I break the role inheritance on a document set and provide individual permissions for users.
    The thing is that no Limited Access is granted on the parent objects (list and web). Did that change in SharePoint 2013? I saw that the method AddToCurrentScopeOnly is new for developers - does SharePoint use that method, too, for adding users?
    Is there a way to change the behaviour back to the old one (like 2010)? Or what is the recommended workaround?
    Thanks in advance!

    Hi Fidy13,
    In SharePoint 2010, the Limited Access permission is shown in the parent scope. For site administrators it is very hard to manage site permissions, because they cannot easily tell where limited users do or do not have permissions.
    In SharePoint 2013, although the Limited Access permission is not shown in the parent scope, users who have Limited Access can still reach the specified scope, and when you remove the permission on the specified scope, the limited access is removed.
    This makes it more convenient for site administrators to manage the site, so I don't recommend changing it.
    Here is a similar post for you to take a look at:
    http://social.msdn.microsoft.com/Forums/windowsapps/en-US/8ce65f4b-fe3b-4326-b4c3-41c0fb427c40/sharepoint-2013-limited-access-permission?forum=sharepointgeneral
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Home server limitations

    Heyo,
    tl;dr at bottom
    I currently have a home server which has been running well; however, my storage requirements keep going up and have reached a point where a change is needed.
    My system has a non-RAID HBA with connectivity for up to 16 drives.
    I knew my storage requirements would grow, but I had not anticipated the speed at which they have.
    At the moment my system boots from an SSD and runs Samba, SSH, SQL, VPN, and multiple simple nginx sites.
    The storage setup at the moment is dm-raid with 7 drives in RAID 5, with a drive soon to be added to provide RAID 6.
    I have ext4 on LUKS; however, I have reached a point where I cannot expand the array past 16TB, which I thought was no longer a problem with ext4.
    Google searches, however, showed me that the filesystem must be created above 16TB initially and cannot be resized past 16TB.
    I chose ext4 as I expected to convert to btrfs once it became viable.
    my questions now are.
    Where should I go from here? I'm currently stuck with a 15TB filesystem that is quickly running out of space.
    It's important for me to have a single volume of data with data protection (HD failure resolution + encryption), and data integrity is becoming a long-term concern.
    With the choice to convert over to btrfs I have another question regarding RAID/encryption: as I understand it, btrfs has RAID 1/0 support and potentially RAID 5+ in the future.
    I currently like having the filesystem on top of LUKS on top of dm-raid; it makes sense to me.
    If I change to btrfs, does this layering choice still make sense?
    Can someone recommend a more appropriate choice?
    In the not too distant future my storage requirements may easily go up to around 30TB+.
    As I see it, I can do the following:
    1. Keep dm-raid + LUKS and replace ext4 with btrfs, then grow the volume (via filesystem conversion) - OK for the meantime, but an array of many disks like this doesn't sound ideal.
    2. Change to a different filesystem that supports multi-device large volumes with the features requested.
    3. Split up the storage into multiple smaller volumes (least preferred).
    Advice would be very much appreciated.
    tl;dr HALP! My many-disk RAID array won't grow above 16TB (ext4) and I need a solution for the future.

    I think that neither of the setups you two describe is really hacky. Layering several filesystems and whatnot on top of each other works fine and people do it all the time. I am, however, a bit critical of both approaches. (Particularly as a fan of ZFS, so consider me biased.)
    1. If seiichiro0185 uses a limited number of 4 disks as raid-z inside a NAS, then this is a perfectly fine solution. But with that setup he won't ever run into the problem that large raid-z arrays with the wrong number of disks (for more info see here: http://www.solarisinternals.com/wiki/in … ces_Guide) or even several raid-z arrays grouped together (even more info here: http://constantin.glez.de/blog/2010/06/ … rformance) will give you performance hits.
    2. If smelly has 7 disks he wants to keep (and is planning to add more over time, if I understood correctly), I would recommend what I wrote above: creating a striped ZFS pool from sets of mirrors, similar to RAID 10, by using zpool add and zpool attach. This will give you less space than RAID-Z (granted) and maybe slower r/w speeds (I don't know enough about that), but your CPU will have to do less work and you will be more flexible in adding/removing disks. Of course, you need an even number of disks for that, so 6 for now: create mirrors of same-size (or similar-size) disks and glue them together to make one pool. You can add more mirrors to extend the pool over time, or, if one disk in a mirror fails, you can replace it with a bigger one. As soon as you replace the other disk, ZFS will automatically grow the pool.
    And keep in mind that setting up ZFS on top of anything else but physical disks (and most definitely placing ZFS anywhere above an mdadm layer) will practically disable ZFS's error correction magic!
    I could go on and on about this but I'll stop here. Hope you don't mind the pamphlet.
