Best practice for SSH access by a user across multiple Xserves?

Hello.
I'm working with three Xserves and a Mac mini server, and I need SSH access to all of them. I have given myself SSH access in Server Admin's access settings, and since all four servers are bound to an OD Master (one of the three Xserves), I'm able to SSH into all four machines using my username/password combination.
What I'm unsure of, though, is how to deal with my home folder when accessing these machines. For example, currently, when I SSH into any of the machines, I get an error saying...
CFPreferences: user home directory at /99 is unavailable. User domains will be volatile.
It then asks for my password, which I enter, and then I get the following error...
Could not chdir to home directory 99: No such file or directory
And then it just dumps me into the root of the server I'm trying to connect to.
How should I go about dealing with this? Since I don't have a local home directory on any of these servers, it has nowhere to put me. I tried enabling/using a network home folder, but I end up with the same issue: since the volume/location designated as my home folder isn't mounted on the servers I'm trying to connect to (and since logging in via SSH doesn't auto-mount the share point the way AFP would if I were actually logging into OS X via the GUI), it again says it can't find my home directory and dumps me into the root of the server I've logged in to.
If anyone could lend some advice on how to properly set this up, it would be much appreciated!
Thanks,
Kristin.

Should logging in via SSH auto-mount the share point?
Yes, of course, but only if you've set it up that way.
What you need to do is designate one of the servers as the repository for home directories. You do this by simply setting up an AFP share point on that server (using Server Admin) and checking the 'enable user home directories' option.
Then go to Workgroup Manager and select your account. Under the Home tab you'll see the options for where this user's home directory lives. It will currently say 'None' (indicating a local home directory on each server); just change this to the recently created share point from above.
Save the account and you're done. When you log in, each server will recognize that your home directory is stored on a network volume and will automatically mount it for you.
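If you want to double-check the account afterwards, dscl (the directory services command-line tool) can read the home attributes from whichever directory node the server is bound to. A small sketch, assuming the short name kristin (purely an example) and wrapping the calls in Python only so both attributes print together:

import subprocess

# Read the home-directory attributes for the account from the directory node
# the server is bound to. "kristin" is just an example short name.
for attr in ("HomeDirectory", "NFSHomeDirectory"):
    result = subprocess.run(
        ["dscl", "/Search", "-read", "/Users/kristin", attr],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout.strip() or result.stderr.strip())

HomeDirectory should now point at the AFP share point and NFSHomeDirectory at the path the servers will mount; if you still see a bare local path like /99, the account record hasn't picked up the change.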

Similar Messages

  • Best Practices for Data Access

    Good morning!
    I was wondering if someone might give me some advice on best practices for retrieving data from a SQL Server instance in the cloud via a desktop application.
    I'm curious whether there is anything "wrong" with embedding the server address (IP, domain, or whatever) in my desktop application and letting users provide their own usernames and passwords when using the application. That is, my application collects the username and password from the user, connects to the server with those credentials, retrieves the data, and uses it in-app.
    I'm petrified of security issues and I would hate to start using a SQL database with this setup only to find out that anyone could download x, y or z and connect to the database and see everything.
    Assuming I secure all of the users with limited permissions, is there anything wrong with exposing a SQL server to the web for my application to use? If so, what and what would be a reasonable alternative?
    I really appreciate any help and feedback!

    There are two options, neither of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create a SQL login that is a member of sysadmin - or that has rights to impersonate an account that is a member of sysadmin. For that matter, you could use the built-in sa account - but rename it to something else.
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden; the IP addresses for the login attempts were in China.
    Just so you know what to expect.
    Erland Sommarskog, SQL Server MVP, [email protected]
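    For what it's worth, the client-side pattern described in the question (per-user credentials entered at run time, an encrypted connection, and parameterised queries) might look roughly like this in Python with pyodbc. This is only a sketch; the server, database, table and column names are placeholders, not anything from the original post.

    import pyodbc

    def fetch_orders(server, database, username, password, customer_id):
        # Per-user credentials supplied at run time; Encrypt=yes forces TLS so
        # the login and the data are not sent in clear text over the internet.
        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server};DATABASE={database};"
            f"UID={username};PWD={password};"
            "Encrypt=yes;TrustServerCertificate=no;"
        )
        with pyodbc.connect(conn_str) as conn:
            cur = conn.cursor()
            # Parameterised query: user input is bound, never interpolated.
            cur.execute(
                "SELECT order_id, order_date FROM dbo.Orders WHERE customer_id = ?",
                customer_id,
            )
            return cur.fetchall()

    Whether this is acceptable still comes down to what the login itself is allowed to do on the server, which is the point above about limited permissions and not leaving sa exposed under its well-known name.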

  • Best practice for select access to users

    Not sure if this is the correct forum to post in; if not, let me know where I should post.
    From my understanding this is the best forum to ask this question.
    Are you aware of any "Best Practice Document" on granting SELECT access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
    From time to time users want more and more access to different tables so that they can investigate properly.
    Let me know if there exists a best practice document around this space.
    Asked in this forum as this is related to PL/SQL access.

    Welcome to the forum!
    Whenever you post, provide your 4-digit Oracle version.
    >
    Are you aware of any "Best Practice Document" on granting SELECT access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
    From time to time users want more and more access to different tables so that they can investigate properly.
    Let me know if there exists a best practice document around this space.
    >
    There are many best practices documents about various aspects of security for Oracle DBs, but none are specific to developers doing investigation.
    Here is the main page for Oracle's OPAC white papers about security.
    http://www.oracletechnetwork-ap.com/topics/201207-Security/resources_whitepaper.cfm
    Take a look at the ones on 'Oracle Identity Management' and on 'Developers and Identity Services'.
    This paper by Database Specialists shows how to use Oracle Identity Management to limit access to users such as developers through the use of roles. It shows some examples of users using their own account but having limited privileges based on the role they are given.
    http://www.dbspecialists.com/files/presentations/implementing_oracle_11g_enterprise_user_security.pdf
    And this Oracle White Paper, 'Oracle Database Security Checklist', is a more basic security doc that discusses the entire range of security issues that should be considered for an Oracle Database.
    http://www.oracle.com/technetwork/database/security/twp-security-checklist-database-1-132870.pdf
    You don't mention what environment (PROD/QA/TEST/DEV) you are even talking about or whether the access is to deal with emergency issues or general reproduction and fixing of bugs.
    Many sites create special READONLY roles, e.g. READ_ONLY_APP1, and then grant privileges to those roles for the tables/objects that the application uses. That role can then be granted to users who need privileges for that application and revoked when they no longer need it.
    Some sites prefer creating special READONLY users that have those read-only roles. If a user needs access, the DBA changes the password and provides the account info to the user. When the user has completed their duties, the DBA resets the password to something no one else knows.
    Those special users have auditing on them and the user using them is responsible for all activity recorded in the logs during the time the user has access to that account.
    In general you grant the minimum privileges needed and revoke them when they are no longer needed; generally through the use of roles.
    >
    Asked in this forum as this is related to PL/SQL access.
    >
    Please explain that. Your question was about 'access to different tables'. How does PL/SQL access fit into that?
    The important reason for the difference is that access is easily controlled through the use of roles, but in named PL/SQL blocks roles are disabled. So the special roles and accounts mentioned above are well suited to letting developers query data, but not if the user needs to execute PL/SQL code belonging to another schema (the app schema).
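    As a rough sketch of the read-only role approach described above (the schema APP1, the role and user names, and the connection details are all made up for illustration; this assumes the python-oracledb driver and a DBA-level connection):

    import oracledb

    # Connection details are placeholders for illustration only.
    conn = oracledb.connect(user="secadmin", password="********", dsn="dbhost/orclpdb1")
    cur = conn.cursor()

    # 1) Create a read-only role for one application schema.
    cur.execute("CREATE ROLE read_only_app1")

    # 2) Grant SELECT on every table owned by that schema to the role.
    #    Identifiers cannot be bound, hence the string formatting here.
    cur.execute("SELECT table_name FROM all_tables WHERE owner = 'APP1'")
    for (table_name,) in cur.fetchall():
        cur.execute(f'GRANT SELECT ON APP1."{table_name}" TO read_only_app1')

    # 3) Grant the role to the developer for the investigation, and
    #    revoke it again once the work is done.
    cur.execute("GRANT read_only_app1 TO dev_user")

    As noted above, a role like this works for ad-hoc querying but will not carry over into definer-rights PL/SQL, where roles are disabled.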

  • OBIEE 11g: Best practice for filtering data allowed to a user

    Hi gurus,
    I have a table of the allowed areas for each user.
    I want to show only the fact data associated with these allowed areas.
    For instance, my user scott can see France and Italy data.
    I created a session variable and put this session variable in a filter.
    It works OK, but only one value (the first one, I think) is taken into account (for instance, with my solution scott will see only France data).
    I need all the possible values.
    I tried the row-wise initialization option of the session variable, but it doesn't work (OBIEE error).
    I've read things on the internet about using STRAGG or VALUELISTOF, but neither worked.
    What would be the best practice for achieving this goal of filtering data by per-user conditions stored in the database?
    Thanks in advance, Emmanuel

    Check this link
    http://oraclebizint.wordpress.com/2008/06/30/oracle-bi-ee-1013332-row-level-security-and-row-wise-intialized-session-variables/

  • Any Best Practices for Guest Access?

    Looking to create a guest-access WLAN so that vendors can have internet access, along with VPN into their own network, while disallowing access to our internal systems.
    I have created a guest WLAN and configured it on the WLC side. I think all I have to do now is configure the core switch with the new VLAN 99 and configure the trunk ports connected to the WLCs.
    My question is, am I missing anything in the setup? And are there any "best practices" when it comes to guest access? I am hoping to use web passthrough authentication; I don't believe this requires any AAA or RADIUS servers, which we don't have set up. I will probably just want a single "guest" account which will provide internet access without allowing access to the internal LAN. Am I on the right track here?

    ***************Guest WLC******************
    (Cisco Controller) >show mobility summary
    Symmetric Mobility Tunneling (current) .......... Enabled
    Symmetric Mobility Tunneling (after reboot) ..... Enabled
    Mobility Protocol Port........................... 16666
    Default Mobility Domain.......................... DMZ
    Multicast Mode .................................. Disabled
    Mobility Domain ID for 802.11r................... 0x43cd
    Mobility Keepalive Interval...................... 10
    Mobility Keepalive Count......................... 3
    Mobility Group Members Configured................ 2
    Mobility Control Message DSCP Value.............. 0
    Controllers configured in the Mobility Group
    MAC Address        IP Address       Group Name       Multicast
    00:19:aa:72:2e:e0  10.192.60.44     Champion Corp    0.0.0.0
    00:19:aa:72:39:80  10.100.100.20    DMZ              0.0.0.0
    (Cisco Controller) >
    ***************Corp WLC*****************
    (Cisco Controller) >show mobility summary
    Symmetric Mobility Tunneling (current) .......... Enabled
    Symmetric Mobility Tunneling (after reboot) ..... Enabled
    Mobility Protocol Port........................... 16666
    Default Mobility Domain.......................... Champion Corp
    Multicast Mode .................................. Disabled
    Mobility Domain ID for 802.11r................... 0x46d5
    Mobility Keepalive Interval...................... 10
    Mobility Keepalive Count......................... 3
    Mobility Group Members Configured................ 2
    Mobility Control Message DSCP Value.............. 0
    Controllers configured in the Mobility Group
    MAC Address        IP Address       Group Name       Multicast IP    Status
    00:19:aa:72:2e:e0  10.192.60.44     Champion Corp    0.0.0.0         Up
    00:19:aa:72:39:80  10.100.100.20    DMZ              0.0.0.0         Up
    (Cisco Controller) >

  • Best Practice for setting BPM Task Potential Users

    Hello,
    Can anyone help me with one doubt I have with BPM?
    When I'm configuring the BPM task I have to set the potential users, and I know this can also be set through an expression. However, my doubt is the following:
    if I set the potential user in the BPM task, then every time the responsible user changes I will have to go to NWDS, change the BPM task's potential user, and build and deploy the BPM again? That's a lot of work.
    Which is the best practice for doing this kind of maintenance?
    Regards
    SU

    You can assign the task to a group.
    Then you only have to make the change on the UME side, adding or removing users from the group.

  • Best Practice for storing PDF docs

    My client has a number of PDF documents for handouts that go with his consulting business. He wants logged-in users to be able to download the PDF handouts for training. The question is, what is the 'best practice' for storing/accessing these PDF files?
    I'm using CF/MySQL to put everything else together, and my thought was to store the PDF files in the db. Except there seems to be a great deal of talk about BLOBs, and storing files this way being inefficient.
    How do I make it so my client can use the admin tool to upload the information about the files and the files themselves, not store them in the db, but still be able to find them when the user wants to download them?

    Storing documents outside the web root and using <cfcontent> to push their contents to the users is the most secure method.
    Putting the documents in a subdirectory of the web root and securing that directory with an Application.cfm will only protect .cfm and .cfc files (as that's the only time that CF is involved in the request). That is, unless you configure CF to handle every request.
    The virtual directory is no safer than putting the documents in a subdirectory. The links to your documents are still going to look like:
    http://www.mysite.com/virtualdirectory/myfile.pdf
    Users won't need to log in to access these documents.
    <cfcontent>, or configuring CF to handle every request, is the only way to ensure users have to log in before accessing non-CF files, unless you want to use web-server authentication.
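    For comparison only, the same pattern in a non-CF stack: keep the PDFs outside the web root and let application code stream them only after a login check. This is a minimal Flask sketch in Python; the directory, session flag, and route are hypothetical, not something from the original thread.

    from flask import Flask, abort, send_from_directory, session

    app = Flask(__name__)
    app.secret_key = "replace-me"   # hypothetical
    DOC_DIR = "/srv/handouts"       # outside the web root (hypothetical path)

    @app.route("/download/<path:filename>")
    def download(filename):
        # Refuse the request unless the user has logged in.
        if not session.get("logged_in"):
            abort(403)
        # send_from_directory also guards against path traversal out of DOC_DIR.
        return send_from_directory(DOC_DIR, filename, as_attachment=True)

    if __name__ == "__main__":
        app.run()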

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It’s not so easy as to segment those databases/tables/records out from other existing ones (and put separate database(s) in the DMZ where our server is). Our current solution is to have a 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole in somewhere. Any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a 'less trodden' path, since that is exactly the area where more security problems are discovered. I don't know about your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best Practices for user ORACLE

    Hello,
    I have a few Linux servers with the user ORACLE.
    All the DBAs in the team connect to and work on the servers as the ORACLE user, and they don't have separate accounts.
    I created an individual account for each DBA and would like them to use it.
    The problem is that I don't want to lock the ORACLE account, since I need it for installations/upgrades and so on, but I also don't want
    the DBA team to connect and work as the ORACLE user.
    What is the best practice for such a case?
    Thanks

    To install databases you don't need access to the ORACLE account.
    Also, installing a 'few databases every month' is fundamentally wrong, as your server will run out of resources, and Oracle can host multiple schemas in one database.
    "One reason for example is that we have many shell scripts that user ORACLE is the owner of them and only user ORACLE have a privilege to execute them."
    Database Control in 10g and higher makes 'scripts' obsolete. Also, as long as you don't provide write (w) access to the dba group, there is nothing wrong with providing execute (x) access.
    You now have a hybrid situation: they are allowed to interactively screw up 'your' databases, yet they aren't allowed to run 'your' scripts.
    Your security 'model' is in urgent need of revision!
    Sybrand Bakker
    Senior Oracle DBA

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin, user logins, and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines need to be able to connect to the network, peripherals, and the external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best Practice for Deleted AD Users

    In our environment, we are not using AD groups; users are being added individually. We are running the User Profile Service, but I am aware that when a user is deleted in AD, they stay in the content database in the UserInfo table so that some metadata can be retained (created by/modified by/etc.).
    What are the best practices for whether or not to get rid of them from the content database(s)?
    What do some of you consultants/admins out there do about this? It was brought up as a concern to me that they are still being seen in some list permissions, the people picker, etc.
    Thank you!

    Personally I would keep them to maintain metadata consistency (Created By etc as you say). I've not had it raised as a concern anywhere I've worked.
    However, there are heaps of resources online about deleting such users (even in bulk via PowerShell). As such, I am unaware of cases where deleting them has caused major problems.
    w: http://www.the-north.com/sharepoint | t: @JMcAllisterCH | YouTube: http://www.youtube.com/user/JamieMcAllisterMVP

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading the XML data as an .xq file (creating an .xq file and copy-pasting all of the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration files specific to the dev/stage/prod instances:
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value which specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running; let's say some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of that global variable is Dev, it means I will load, at runtime, the XML config file under the Dev directory containing the key/value pairs for the Dev environment.
    6] This thread, Re: OSB: What is best practice for reading configuration information,
    suggests designing a web application which serves the XML file over the HTTP protocol and getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    e.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
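    Outside of OSB, the 'derive the environment from the endpoint and load the matching file' idea can be sketched in a few lines. This is plain Python for illustration only; the URI pattern and file names simply follow the folder layout described in the question, not any OSB API.

    import re
    import xml.etree.ElementTree as ET

    def load_config(inbound_uri):
        # Pull the environment segment out of the endpoint URI.
        match = re.search(r"/service/(dev|stage|prod)/", inbound_uri, re.IGNORECASE)
        env = match.group(1).capitalize() if match else "Dev"
        # e.g. Dev/Dev_Config_file.xml, Stage/Stage_Config_file.xml, ...
        tree = ET.parse(f"{env}/{env}_Config_file.xml")
        return {c.get("key"): c.text for c in tree.findall("Config")}

    print(load_config("http://osb_host/service/stage/service1"))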

  • Best practice for how to access a set of wsdl and xsd files

    I've recently been poking around with the Oracle ESB, which requires a bunch of WSDL and XSD files from HOME/bpel/system/xmllib. What is the best practice for including these files in a BPEL project? It seems like a bad idea to copy all these files into every project that uses the ESB, especially if there are quite a few consumers of the bus. Is there a way I can reference this directory from the project so that the files can just stay in a common place for all the projects that use them?
    Bret

    Hi,
    I created a project (JDeveloper) with local XSD files and tried to delete them and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the XSDs anymore, and for the payload in the variables there is an exception (problem building schema).
    How does BPEL know where to look for the XSD files, and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project, or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best practice for ASA Active/Standby failover

    Hi,
    I have configured a pair of Cisco ASAs in Active/Standby mode (see attached). What can be done to allow traffic to go from R1 to R2 via ASA2 when ASA1's inside or outside interface is down?
    Currently this happens only when ASA1 is down (shut down). Is there any recommended best practice for such network redundancy? Thanks in advance!

    Hi Vibhor,
    I tested a ping from R1 to R2, and the ping drops when I shut down either the inside (g1) or outside (g0) interface of the active ASA. Below are the ASA 'show failover' and 'show run' outputs:
    ASSA1# conf t
    ASSA1(config)# int g1
    ASSA1(config-if)# shut
    ASSA1(config-if)# show failover
    Failover On
    Failover unit Primary
    Failover LAN Interface: FAILOVER GigabitEthernet2 (up)
    Unit Poll frequency 1 seconds, holdtime 15 seconds
    Interface Poll frequency 5 seconds, holdtime 25 seconds
    Interface Policy 1
    Monitored Interfaces 3 of 60 maximum
    Version: Ours 8.4(2), Mate 8.4(2)
    Last Failover at: 14:20:00 SGT Nov 18 2014
            This host: Primary - Active
                    Active time: 7862 (sec)
                      Interface outside (100.100.100.1): Normal (Monitored)
                      Interface inside (192.168.1.1): Link Down (Monitored)
                      Interface mgmt (10.101.50.100): Normal (Waiting)
            Other host: Secondary - Standby Ready
                    Active time: 0 (sec)
                      Interface outside (100.100.100.2): Normal (Monitored)
                      Interface inside (192.168.1.2): Link Down (Monitored)
                      Interface mgmt (0.0.0.0): Normal (Waiting)
    Stateful Failover Logical Update Statistics
            Link : FAILOVER GigabitEthernet2 (up)
            Stateful Obj    xmit       xerr       rcv        rerr
            General         1053       0          1045       0
            sys cmd         1045       0          1045       0
            up time         0          0          0          0
            RPC services    0          0          0          0
            TCP conn        0          0          0          0
            UDP conn        0          0          0          0
            ARP tbl         2          0          0          0
            Xlate_Timeout   0          0          0          0
            IPv6 ND tbl     0          0          0          0
            VPN IKEv1 SA    0          0          0          0
            VPN IKEv1 P2    0          0          0          0
            VPN IKEv2 SA    0          0          0          0
            VPN IKEv2 P2    0          0          0          0
            VPN CTCP upd    0          0          0          0
            VPN SDI upd     0          0          0          0
            VPN DHCP upd    0          0          0          0
            SIP Session     0          0          0          0
            Route Session   5          0          0          0
            User-Identity   1          0          0          0
            Logical Update Queue Information
                            Cur     Max     Total
            Recv Q:         0       9       1045
            Xmit Q:         0       30      10226
    ASSA1(config-if)#
    ASSA1# sh run
    : Saved
    ASA Version 8.4(2)
    hostname ASSA1
    enable password 2KFQnbNIdI.2KYOU encrypted
    passwd 2KFQnbNIdI.2KYOU encrypted
    names
    interface GigabitEthernet0
     nameif outside
     security-level 0
     ip address 100.100.100.1 255.255.255.0 standby 100.100.100.2
     ospf message-digest-key 20 md5 *****
     ospf authentication message-digest
    interface GigabitEthernet1
     nameif inside
     security-level 100
     ip address 192.168.1.1 255.255.255.0 standby 192.168.1.2
     ospf message-digest-key 20 md5 *****
     ospf authentication message-digest
    interface GigabitEthernet2
     description LAN/STATE Failover Interface
    interface GigabitEthernet3
     shutdown
     no nameif
     no security-level
     no ip address
    interface GigabitEthernet4
     nameif mgmt
     security-level 0
     ip address 10.101.50.100 255.255.255.0
    interface GigabitEthernet5
     shutdown
     no nameif
     no security-level
     no ip address
    ftp mode passive
    clock timezone SGT 8
    access-list OUTSIDE_ACCESS_IN extended permit icmp any any
    pager lines 24
    logging timestamp
    logging console debugging
    logging monitor debugging
    mtu outside 1500
    mtu inside 1500
    mtu mgmt 1500
    failover
    failover lan unit primary
    failover lan interface FAILOVER GigabitEthernet2
    failover link FAILOVER GigabitEthernet2
    failover interface ip FAILOVER 192.168.99.1 255.255.255.0 standby 192.168.99.2
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-715-100.bin
    no asdm history enable
    arp timeout 14400
    access-group OUTSIDE_ACCESS_IN in interface outside
    router ospf 10
     network 100.100.100.0 255.255.255.0 area 1
     network 192.168.1.0 255.255.255.0 area 0
     area 0 authentication message-digest
     area 1 authentication message-digest
     log-adj-changes
     default-information originate always
    route outside 0.0.0.0 0.0.0.0 100.100.100.254 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    user-identity default-domain LOCAL
    aaa authentication ssh console LOCAL
    http server enable
    http 10.101.50.0 255.255.255.0 mgmt
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
    telnet timeout 5
    ssh 10.101.50.0 255.255.255.0 mgmt
    ssh timeout 5
    console timeout 0
    tls-proxy maximum-session 10000
    threat-detection basic-threat
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    webvpn
    username cisco password 3USUcOPFUiMCO4Jk encrypted
    prompt hostname context
    no call-home reporting anonymous
    call-home
     profile CiscoTAC-1
      no active
      destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
      destination address email [email protected]
      destination transport-method http
      subscribe-to-alert-group diagnostic
      subscribe-to-alert-group environment
      subscribe-to-alert-group inventory periodic monthly
      subscribe-to-alert-group configuration periodic monthly
      subscribe-to-alert-group telemetry periodic daily
    crashinfo save disable
    Cryptochecksum:fafd8a885033aeac12a2f682260f57e9
    : end
    ASSA1#

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 procurement catalogs (1 for every product category). Each catalog should contain only its product category.
    2. In my Org Model, assign at the user level all the "catalogs" that the user should access.
    Do you have any ideas on how to improve this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, with for example strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I experienced it, it works, but it is quite difficult), but with a limited number of views.
    Good luck.
    Vadim
